K8s集群安装
Kubeadm
“kubeadm是官方社区推出的一个用于快速部署 kubernetes集群的工具。这个工具能通过两条指令完成一个 kubernetes集群的部署”
创建一个Master节点
$ kubeadm init
将一个Node节点加入到当前集群中
$ kubeadm join <Master 节点的IP和端口>
“所有服务器安装Docker,
此处跳过”
生产环境需要开启的端口
文档:https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm
master节点开端口
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent
firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-port=10251/tcp --permanent
firewall-cmd --zone=public --add-port=10252/tcp --permanent
#———————————————————————————————————————————————
firewall-cmd --zone=public --add-port=10255/tcp --permanent
firewall-cmd --zone=public --add-port=8472/udp --permanent
firewall-cmd --zone=public --add-port=443/udp --permanent
firewall-cmd --zone=public --add-port=53/udp --permanent
firewall-cmd --zone=public --add-port=53/tcp --permanent
firewall-cmd --zone=public --add-port=9153/tcp --permanent
# 开启伪装IP
firewall-cmd --permanent --add-masquerade
# 仅当您还希望NodePort在控制平面IP上公开时
firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
# 刷新规则
systemctl restart firewalld
Kubesphere:https://kubesphere.com.cn/docs/zh-CN/installation/port-firewall 需要开放的端口
firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=9099-9100/tcp --permanent
firewall-cmd --zone=public --add-port=179/tcp --permanent
firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
firewall-cmd --zone=public --add-port=10250-10258/tcp --permanent
firewall-cmd --zone=public --add-port=53/udp --permanent
firewall-cmd --zone=public --add-port=53/tcp --permanent
firewall-cmd --zone=public --add-port=111/tcp --permanent
node节点
firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-port=10255/tcp --permanent
firewall-cmd --zone=public --add-port=8472/udp --permanent
firewall-cmd --zone=public --add-port=443/udp --permanent
firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
firewall-cmd --zone=public --add-port=53/udp --permanent
firewall-cmd --zone=public --add-port=53/tcp --permanent
firewall-cmd --zone=public --add-port=9153/tcp --permanent
# 开启伪装IP
firewall-cmd --permanent --add-masquerade
# 说明
# 8472/udp 为flannel的通信端口
# 443/tcp 为Kubernetes server端口
# 刷新规则
systemctl restart firewalld
把内网 IP 192.168.0.111 转向对应的外网 IP
# 主master做了两个转发,因为master会用内网去访问别的节点,所以要重新转发到外网,而且刷新端口规则后就会失效,需要重新配置
# node1节点的
iptables -t nat -A OUTPUT -d xx.xx.xx.xx -j DNAT --to-destination xx.xx.xx.xx
# 删除:把 -A 改成 -D
# 保存
service iptables save
# 如果报错 The service command supports only basic LSB actions (start....),是因为还没有安装服务
# 安装
yum install iptables-services
# 开机启动
systemctl enable iptables.service
# 查看规则
iptables -nL --line-number
添加阿里云Yum源
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
更多详情见:https://developer.aliyun.com/mirror/kubernetes
安装kubeadm,kubelet和kubectl
检查yum源中是否有kube相关的安装包
yum list | grep kube
安装
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
开机启动
systemctl enable kubelet && systemctl start kubelet
查看kubelet的状态
systemctl status kubelet
# 此时kubelet都是无法启动的,请按照文档继续配置
查看kubelet版本
kubelet --version
部署k8s-master
master节点初始化(可以直接初始化,但是会比较慢)
“在Master节点上,创建并执行master_images.sh文件”
vim master_images.sh
# 将下面内容复制到该文件中
# 这时候这个文件还没有执行权限
chmod 700 master_images.sh
# 执行文件
./master_images.sh
# 执行完成后可查看镜像
docker images
master_images.sh
#!/bin/bash
images=(
	kube-apiserver:v1.17.3
	kube-proxy:v1.17.3
	kube-controller-manager:v1.17.3
	kube-scheduler:v1.17.3
	coredns:1.6.5
	etcd:3.4.3-0
	pause:3.1
)
for imageName in ${images[@]} ; do
	docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
	# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
初始化kubeadm
$ kubeadm init \
--apiserver-advertise-address=xx.xx.xx.xx \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.17.3 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16

# 命令解释-----------------------------------------------
$ kubeadm init \                                                          # 初始化
--apiserver-advertise-address=xx.xx.xx.xx \                               # 指定apiserver的地址,也就是master的地址
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \  # 指定用阿里云镜像
--kubernetes-version v1.17.3 \                                            # 指定Kubernetes的版本
--service-cidr=10.96.0.0/16 \                                             # Service虚拟IP(ClusterIP)的网段
--pod-network-cidr=10.244.0.0/16                                          # Pod的网段

# 如果初始化失败需要重来,先清理第一次初始化产生的文件
kubeadm reset
⚠️生产模式初始化遇到的坑(阿里云)
错误信息
Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'
“etcd可以通过公网ip部署集群,而且可以运行应用,但是集群节点信息里是没有公网ip的,而且貌似k8s是让kubelet直接与apiserver通过内网通信的,使用kubectl log等命令时还是用的内网IP,所以会提示timeout。要想真正实现公网ip部署k8s还是配置iptables进行重定向或其它更专业的方式。”
我这里的解决方案
“自己琢磨出来,不能用外网初始化,换成内网试试”
准备两个终端,在kubeadm初始化过程中对etcd进行修改
⚠️⚠️⚠️一定要在初始化中,不然初始化的时候会覆盖配置
# 执行kubeadm init
# 用另外一个终端
vi /etc/kubernetes/manifests/etcd.yaml

#------------修改前,只复制了重点--------------------
containers:
- command:
  - etcd
  - --advertise-client-urls=https://xx.xx.xx.xx:2379
  - --cert-file=/etc/kubernetes/pki/etcd/server.crt
  - --client-cert-auth=true
  - --data-dir=/var/lib/etcd
  - --initial-advertise-peer-urls=https://xx.xx.xx.xx:2380
  - --initial-cluster=master=https://xx.xx.xx.xx:2380
  - --key-file=/etc/kubernetes/pki/etcd/server.key
  - --listen-client-urls=https://127.0.0.1:2379,https://xx.xx.xx.xx:2379   # 修改这里
  - --listen-metrics-urls=http://127.0.0.1:2381
  - --listen-peer-urls=https://xx.xx.xx.xx:2380   # 修改这里
  - --name=master
  - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
  - --peer-client-cert-auth=true
  - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
  - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
  - --snapshot-count=10000
  - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

#--------------------修改后-----------------------------
containers:
- command:
  - etcd
  - --advertise-client-urls=https://xx.xx.xx.xx:2379
  - --cert-file=/etc/kubernetes/pki/etcd/server.crt
  - --client-cert-auth=true
  - --data-dir=/var/lib/etcd
  - --initial-advertise-peer-urls=https://xx.xx.xx.xx:2380
  - --initial-cluster=master=https://xx.xx.xx.xx:2380
  - --key-file=/etc/kubernetes/pki/etcd/server.key
  - --listen-client-urls=https://127.0.0.1:2379   # 改成这样
  - --listen-metrics-urls=http://127.0.0.1:2381
  - --listen-peer-urls=https://127.0.0.1:2380   # 改成这样
  - --name=master
  - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
  - --peer-client-cert-auth=true
  - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
  - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
  - --snapshot-count=10000
  - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
“由于默认拉取镜像地址 k8s.gcr.io 国内无法访问,这里指定阿里云仓库地址,也可以手动按照我们的 master_images.sh 先拉取镜像。
地址换为 registry.aliyuncs.com/google_containers 也可以。
科普:无类别域间路由(Classless Inter-Domain Routing,CIDR)是一个用于给用户分配IP地址以及在互联网上有效地路由IP数据包的对IP地址进行归类的方法。
拉取可能失败,需要提前下载好镜像。”
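下面结合上文 kubeadm init 的两个网段参数做一个简单的对照说明(数值即上文命令中的取值,仅帮助理解):

# --service-cidr=10.96.0.0/16       Service 的 ClusterIP 从 10.96.0.0 ~ 10.96.255.255 中分配
# --pod-network-cidr=10.244.0.0/16  Pod 的 IP 从 10.244.0.0 ~ 10.244.255.255 中分配(flannel 默认网段)
# 这两个网段不要与宿主机所在内网网段重叠,否则路由会冲突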
执行完成后控制台打印结果
Your Kubernetes control-plane has initialized successfully!
# 您的Kubernetes控制平面已成功初始化!

To start using your cluster, you need to run the following as a regular user:
# 要开始使用集群,您需要以普通用户身份运行以下命令(添加kubectl配置、权限等):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
# 这里一定要留个备份:可以在每个节点上以root身份运行以下命令,让任意多的node节点加入master

kubeadm join 172.26.215.100:6443 --token 3788fx.y6bp0nvq8ejaazxo \
    --discovery-token-ca-cert-hash sha256:865b99aee73005e810f76e9a4a42de5836d570e9a3f8b9142c7306e897d98b2a
测试Kubectl(主节点执行)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
详细部署文档:https://kubernetes.io/docs/concepts/cluster-administration/addons
# 获取所有节点
$ kubectl get nodes
# 目前Master状态为NotReady,等待网络插件安装完成即可
$ journalctl -u kubelet   # 查看kubelet日志
加入节点的token令牌(以自己初始化时生成的为准;如果不小心清屏了,下面有重新生成的方案。token默认24小时有效)
kubeadm join 10.0.2.15:6443 --token ppzpi7.qrb6jaxqwaxtnwss \
    --discovery-token-ca-cert-hash sha256:14939d558594509c33c551bdbb75578b19fabbd7c95fb3b8e15a5a1aec378613
如果token过期了怎么办
kubeadm token create --print-join-command           # 重新创建一个
kubeadm token create --ttl 0 --print-join-command   # 创建一个永不过期的
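如果只是想查看现有token,或者手动重新计算 --discovery-token-ca-cert-hash,也可以用下面这组通用命令(示意,按需使用):

# 查看当前所有token及其有效期
kubeadm token list
# 在master上重新计算CA证书哈希,结果拼接到 kubeadm join 的 --discovery-token-ca-cert-hash sha256:<hash>
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'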
安装POD网络插件(CNI)
“我们选用Flannel,它是可与Kubernetes一起使用的覆盖网络(overlay network)提供商。”
官网:https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md
在master节点上执行安装POD网络插件
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
“以上地址可能被墙,可以直接获取本地已经下载的flannel.yml运行即可”
[root@k8s-node1 k8s]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
#--------------------------如果想删除这个文件应用的配置----------------------------------
[root@k8s-node1 k8s]# kubectl delete -f kube-flannel.yml
“同时,如果flannel.yml中指定的images访问不到,可以去Docker Hub找一个可用的镜像;先wget该yml,
再用vi修改yml,把所有amd64相关的镜像地址替换掉即可(见下面的示意),然后等待大约3分钟”
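下面给出一个批量替换镜像地址的示意(其中 <可用镜像仓库前缀> 是假设的占位符,请替换成实际能访问到的 flannel 镜像地址):

# 先把yml下载到本地
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# 将yml中quay.io的flannel镜像统一替换为可访问的镜像
sed -i 's#quay.io/coreos/flannel#<可用镜像仓库前缀>/flannel#g' kube-flannel.yml
# 替换后重新应用
kubectl apply -f kube-flannel.yml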
查看名称空间
[root@k8s-node1 k8s]# kubectl get ns
NAME              STATUS   AGE
default           Active   28m
kube-node-lease   Active   28m
kube-public       Active   28m
kube-system       Active   28m
查看指定名称空间的pods
[root@k8s-node1 k8s]# kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-7f9c544f75-fjqdn            1/1     Running   0          30m
coredns-7f9c544f75-rmk2c            1/1     Running   0          30m
etcd-k8s-node1                      1/1     Running   0          30m
kube-apiserver-k8s-node1            1/1     Running   0          30m
kube-controller-manager-k8s-node1   1/1     Running   0          30m
kube-flannel-ds-amd64-m9cnc         1/1     Running   0          6m43s
kube-proxy-2lwtr                    1/1     Running   0          30m
kube-scheduler-k8s-node1            1/1     Running   0          30m
获取所有名称空间的POD
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f9c544f75-fjqdn            1/1     Running   0          29m
kube-system   coredns-7f9c544f75-rmk2c            1/1     Running   0          29m
kube-system   etcd-k8s-node1                      1/1     Running   0          28m
kube-system   kube-apiserver-k8s-node1            1/1     Running   0          28m
kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          28m
kube-system   kube-flannel-ds-amd64-m9cnc         1/1     Running   0          4m50s
kube-system   kube-proxy-2lwtr                    1/1     Running   0          29m
kube-system   kube-scheduler-k8s-node1            1/1     Running   0          28m
“⚠️kube-flannel-ds-amd64-m9cnc 一定要是运行状态,才可以操作接下来的步骤,如果未运行,请等待”
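如果flannel的pod长时间不是Running,可以先看它的事件和日志再排查(通用做法示意,标签 app=flannel 以实际yml中的label为准):

kubectl -n kube-system describe pod -l app=flannel     # 查看调度、拉取镜像等事件
kubectl -n kube-system logs -l app=flannel --tail=50   # 查看flannel容器日志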
获取master上所有节点
[root@k8s-node1 k8s]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    master   35m   v1.17.3   # status为Ready才能够执行下面的命令
[root@k8s-node1 k8s]#
让别的节点加入master
“⚠️ 复制自己的初始化kubeadm控制台有打印”
kubeadm join 10.0.2.15:6443 --token ppzpi7.qrb6jaxqwaxtnwss \
    --discovery-token-ca-cert-hash sha256:14939d558594509c33c551bdbb75578b19fabbd7c95fb3b8e15a5a1aec378613
[root@k8s-node1 opt]# kubectl get nodes
NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   Ready      master   47m   v1.17.3
k8s-node2   NotReady   <none>   75s   v1.17.3
k8s-node3   NotReady   <none>   76s   v1.17.3
监控pod进度
watch kubectl get pod -n kube-system -o wide
等到所有的status都变为running状态后,再次查看节点信息:
[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    master   3h50m   v1.17.3
k8s-node2   Ready    <none>   3h3m    v1.17.3
k8s-node3   Ready    <none>   3h3m    v1.17.3
如果出现一直无法加入节点
# 首先检查服务器之间是否能互相ping通
# 查看node节点容器运行状态
docker ps -a
# 如果发现quay.io/coreos/flannel相关容器都没有运行(正常应该是四个容器在跑)
# 在master节点删除node2节点后重新加入
kubectl delete node k8s-node2
# 在node2(出错的节点)上删除所有容器
docker ps -qa | xargs docker rm -f
# 删除旧的配置文件
rm -f /etc/kubernetes/kubelet.conf
# 删除旧的ca文件
rm -f /etc/kubernetes/pki/ca.crt
# 节点重启服务
systemctl restart kubelet docker
# 重新注册
kubeadm join 10.0.2.15:6443 --token 6h7qvv.oycchgw0h1seecl4 --discovery-token-ca-cert-hash sha256:59243686366bfd2161d7799afd386950a74884c55e41a14a9389b0d0473d091f

#-----------------node节点加入master报错----------------------------
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
# 解决方案
kubeadm reset
# 根据提示执行
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# 重新注册
kubeadm join ……
rm -f /etc/kubernetes/kubelet.conf
rm -f /etc/kubernetes/pki/ca.crt
systemctl restart kubelet docker
如果还不行
官方仓库地址手动拉取镜像:https://github.com/coreos/flannel/releases
将镜像导入docker中
# 将镜像文件上传到服务器后逐个导入
docker load < flanneld-v0.12.0-amd64.docker
docker load < flanneld-v0.12.0-arm64.docker
docker load < flanneld-v0.12.0-arm.docker
docker load < flanneld-v0.12.0-ppc64le.docker
docker load < flanneld-v0.12.0-s390x.docker
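导入完成后可以确认镜像是否已经在本地(示意):

docker images | grep flannel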
入门操作kubernetes集群
部署
在主节点上部署一个tomcat
kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8
获取所有的资源
[root@k8s-node1 k8s]# kubectl get all
NAME                           READY   STATUS              RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-cfd8g   0/1     ContainerCreating   0          41s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   70m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   0/1     1            0           41s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   1         1         0       41s
获取到tomcat完整部署信息
[root@k8s-node1 k8s]# kubectl get all -o wide
NAME                           READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
pod/tomcat6-7b84fb5fdc-cfd8g   1/1     Running   0          114s   10.244.2.2   k8s-node2   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   71m   <none>

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES               SELECTOR
deployment.apps/tomcat6   1/1     1            1           114s   tomcat       tomcat:6.0.53-jre8   app=tomcat6

NAME                                 DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES               SELECTOR
replicaset.apps/tomcat6-7b84fb5fdc   1         1         1       114s   tomcat       tomcat:6.0.53-jre8   app=tomcat6,pod-template-hash=7b84fb5fdc
获取默认名称空间的pod
[root@k8s-node1 /]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
tomcat-6cfdcff7d7-8rm7c   1/1     Running   0          2m52s
[root@k8s-node1 /]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
default       tomcat-6cfdcff7d7-8rm7c             1/1     Running   0          5m17s
kube-system   coredns-7f9c544f75-7qpl8            1/1     Running   0          68m
kube-system   coredns-7f9c544f75-xjkv9            1/1     Running   0          68m
kube-system   etcd-k8s-node1                      1/1     Running   1          68m
kube-system   kube-apiserver-k8s-node1            1/1     Running   1          68m
kube-system   kube-controller-manager-k8s-node1   1/1     Running   1          68m
kube-system   kube-flannel-ds-amd64-6bf8d         1/1     Running   0          65m
kube-system   kube-flannel-ds-amd64-dj8x2         1/1     Running   0          38m
kube-system   kube-flannel-ds-amd64-l7jtz         1/1     Running   0          35m
kube-system   kube-proxy-fnmqw                    1/1     Running   0          38m
kube-system   kube-proxy-m8dn5                    1/1     Running   0          35m
kube-system   kube-proxy-mj7qf                    1/1     Running   1          68m
kube-system   kube-scheduler-k8s-node1            1/1     Running   1          68m
从前面看到tomcat部署在Node2上,现在模拟因为各种原因宕机的情况,将node2关闭电源,观察情况。
[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS     ROLES    AGE     VERSION
k8s-node1   Ready      master   4h4m    v1.17.3
k8s-node2   Ready      <none>   3h18m   v1.17.3
k8s-node3   NotReady   <none>   3h18m   v1.17.3
“它会在别的节点重新拉取镜像,重新部署一份”
[root@k8s-node1 ~]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
tomcat6-7b84fb5fdc-cfd8g   1/1     Running   0          177m   10.244.2.2   k8s-node2   <none>           <none>
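如果想持续观察这个故障转移的过程,可以加 -w 参数一直盯着pod变化(示意;默认约5分钟的驱逐超时后,pod才会在其它节点重建):

kubectl get pods -o wide -w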
暴露tomcat访问
在master上执行
kubectl expose deployment tomcat --port=80 --target-port=8080 --type=NodePort
# --port=80 是Service暴露的端口,--target-port=8080 是pod容器里面暴露的端口,以NodePort方式暴露服务
“Service的80端口映射到容器的8080;外部通过Service(NodePort)访问时,请求会被转发到pod的8080”
查看服务
[root@k8s-node1 /]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        100m
tomcat       NodePort    10.96.133.27   <none>        80:30059/TCP   33s
[root@k8s-node1 /]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE    SELECTOR
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        100m   <none>
tomcat       NodePort    10.96.133.27   <none>        80:30059/TCP   73s    app=tomcat
“通过随机分配的端口号,就可以访问服务 http://192.168.56.100:30059/”
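也可以在命令行直接验证(30059 为随机分配的 NodePort,请以 kubectl get svc 的实际输出为准):

curl http://192.168.56.100:30059/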
动态扩容测试
# 将之前部署的tomcat复制成三份
[root@k8s-node1 /]# kubectl scale --replicas=3 deployment tomcat
deployment.apps/tomcat scaled
# 查看pod详情
[root@k8s-node1 /]# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
tomcat-6cfdcff7d7-8rm7c   1/1     Running   0          43m   10.244.2.2   k8s-node3   <none>           <none>
tomcat-6cfdcff7d7-9g4n9   1/1     Running   0          85s   10.244.1.2   k8s-node2   <none>           <none>
tomcat-6cfdcff7d7-vbg8j   1/1     Running   0          85s   10.244.2.3   k8s-node3   <none>           <none>
查询部署列表
[root@k8s-node1 /]# kubectl get deployment
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
tomcat   3/3     3            3           44m
缩容
[root@k8s-node1 /]# kubectl scale --replicas=1 deployment tomcat
deployment.apps/tomcat scaled
[root@k8s-node1 /]# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
tomcat-6cfdcff7d7-9g4n9   1/1     Running   0          4m6s   10.244.1.2   k8s-node2   <none>           <none>
删除
# 查看所有资源
[root@k8s-node1 /]# kubectl get all
NAME                          READY   STATUS    RESTARTS   AGE
pod/tomcat-6cfdcff7d7-9g4n9   1/1     Running   0          6m17s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        111m
service/tomcat       NodePort    10.96.133.27   <none>        80:30059/TCP   11m

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat   1/1     1            1           47m

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat-6cfdcff7d7   1         1         1       47m

# 删除deployment.apps/tomcat
[root@k8s-node1 /]# kubectl delete deployment.apps/tomcat
deployment.apps "tomcat" deleted

# 查看剩余的资源
[root@k8s-node1 /]# kubectl get all
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        115m
service/tomcat       NodePort    10.96.133.27   <none>        80:30059/TCP   15m

# 删除service/tomcat
[root@k8s-node1 /]# kubectl delete service/tomcat
service "tomcat" deleted

# 还剩一个默认service,443安全连接端口
[root@k8s-node1 /]# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   117m
K8s细节
kubectl文档:https://kubernetes.io/zh/docs/reference/kubectl/overview
资源类型:https://kubernetes.io/zh/docs/reference/kubectl/overview/#%E8%B5%84%E6%BA%90%E7%B1%BB%E5%9E%8B
格式化输出:https://kubernetes.io/zh/docs/reference/kubectl/overview/#%E6%A0%BC%E5%BC%8F%E5%8C%96%E8%BE%93%E5%87%BA
“所有 kubectl 命令的默认输出格式都是人类可读的纯文本格式。要以特定格式向终端窗口输出详细信息,可以将 -o 或 --output 参数添加到受支持的 kubectl 命令中。”
语法
kubectl [command] [TYPE] [NAME] -o=<output_format>
根据 kubectl 操作,支持以下输出格式:
Output format | Description |
---|---|
-o custom-columns=<spec> | 使用逗号分隔的 自定义列:https://kubernetes.io/zh/docs/reference/kubectl/overview/#custom-columns 列表打印表。 |
-o custom-columns-file=<filename> | 使用 <filename> 文件中的 自定义列:https://kubernetes.io/zh/docs/reference/kubectl/overview/#custom-columns 模板打印表。 |
-o json | 输出 JSON 格式的 API 对象。 |
-o jsonpath=<template> | 打印 jsonpath:https://kubernetes.io/docs/reference/kubectl/jsonpath/ 表达式定义的字段。 |
-o jsonpath-file=<filename> | 打印 <filename> 文件中 jsonpath:https://kubernetes.io/docs/reference/kubectl/jsonpath/ 表达式定义的字段。 |
-o name | 仅打印资源名称而不打印任何其他内容。 |
-o wide | 以纯文本格式输出,包含任何附加信息。对于 pod 包含节点名。 |
-o yaml | 输出 YAML 格式的 API 对象。 |
示例
“在此示例中,以下命令将单个 pod 的详细信息输出为 YAML 格式的对象:”
kubectl get pod web-pod-13je7 -o yaml
“请记住:有关每个命令支持哪种输出格式的详细信息,请参阅 kubectl:https://kubernetes.io/docs/user-guide/kubectl 参考文档。”
--dry-run:
“--dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.
值必须为 none、server 或 client。如果是 client 策略,则只打印将要发送的对象,但不真正发送它;如果是 server 策略,则提交服务器端请求而不持久化资源。
也就是说,通过 --dry-run 选项,并不会真正执行这条命令。”
# 让这个创建过程以yaml打印出来
[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml
W0504 03:39:08.389369    8107 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat6
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
        resources: {}
status: {}

# 输出到tomcat6.yaml
[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml > tomcat6.yaml
完整用yaml部署案例
将yaml输出到文件中
kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml > tomcat-deployment.yaml
# 编辑后
apiVersion: apps/v1
kind: Deployment   # 是一次部署
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3   # 生成三个副本
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8   # 使用镜像
        name: tomcat
暴露访问
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort --dry-run -o yaml
将两个yaml合成一个
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata:
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6
  type: NodePort
执行yaml部署并暴露服务
[root@k8s-node1 ~]# kubectl apply -f tomcat-deployment.yaml
deployment.apps/tomcat6 created
service/tomcat6 created
Ingress
“通过 Service发现Pod进行关联。基于域名访问
通过 Ingress Controller实现Pod负载均衡
支持TCP/UDP 4层负载均衡和HTTP 7层负载均衡”
步骤
部署 Ingress Controller
kubectl apply -f ingress-controller.yaml
Create the Ingress rule
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: tomcat6.kubenetes.com     # domain name
    http:
      paths:
      - backend:
          serviceName: tomcat6      # name exposed by the Service
          servicePort: 80           # which Service port to use
Apply the rule
[root@k8s-node1 ~]# kubectl apply -f ingress-tomcat6.yaml
ingress.extensions/web created
Configure the hostname mapping on your local machine
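Add an entry to the local hosts file (/etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows) pointing the test domain at any cluster node; the IP below is a placeholder for your own node address:
<node-ip>  tomcat6.kubenetes.com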
“Test: http://tomcat6.kubenetes.com/
Even if one node in the cluster becomes unavailable, the application as a whole keeps running.”
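A quick way to observe this (a sketch; node and Pod names depend on your cluster) is to watch where the replicas are scheduled and confirm they get recreated on the remaining nodes when one node goes down:
kubectl get nodes
kubectl get pods -o wide --watch    # Pods from a failed node are rescheduled elsewhere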
Install the Kubernetes web UI: Dashboard
kubectl apply -f kubernetes-dashboard.yaml
Expose the Dashboard for external access
“By default the Dashboard is only reachable from inside the cluster; change its Service to type NodePort to expose it externally.”
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
Access URL: https://NodeIP:30001 (the Dashboard serves HTTPS)
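The NodePort actually assigned can be confirmed from the Service (assuming the manifest above was applied in kube-system):
kubectl -n kube-system get svc kubernetes-dashboard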
Create an account with access rights
$ kubectl create serviceaccount dashboard-admin -n kube-system
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Log in to the Dashboard with the token printed above.
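If you only want the raw token (for example to paste into the login page), this one-liner extracts and decodes it; it assumes the service account was created in kube-system as above:
kubectl -n kube-system get secret $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') -o jsonpath='{.data.token}' | base64 -d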
KubeSphere (a more powerful visual console)
“The default dashboard is not very useful; with KubeSphere we can cover the whole DevOps pipeline.
KubeSphere bundles many components, so its cluster requirements are relatively high.”
Official site: https://kubesphere.io
Kuboard is also worth a look: a very lightweight console with modest cluster requirements.
Official site: https://kuboard.cn
“KubeSphere is an open-source project designed for cloud native: a distributed, multi-tenant container management platform built on top of Kubernetes, today's mainstream container orchestration platform. It provides an easy-to-use interface and wizard-style workflows, lowering the learning curve of container orchestration while greatly reducing the day-to-day complexity of development, testing and operations.”
Installation
“Pay close attention to its prerequisites ⚠️⚠️”
KubeSphere installation docs: https://kubesphere.com.cn/docs/zh-CN/installation/prerequisites/
Install Helm
“Helm is the package manager for Kubernetes. A package manager is like apt on Ubuntu, yum on CentOS or pip for Python: it lets you quickly find, download and install packages. Helm consists of the client component helm and the server-side component Tiller. It packages a set of Kubernetes resources for unified management and is the best way to find, share and use software built for Kubernetes.”
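A minimal sketch of day-to-day Helm v2 usage; the chart and release names here are only examples:
# search the configured repos for a chart
helm search mysql
# install a chart as a named release
helm install --name my-mysql stable/mysql
# list releases and remove one completely
helm list
helm del --purge my-mysql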
curl -L https://git.io/get_helm.sh | bash
Verify the version
helm version
Create RBAC permissions (run on the master)
helm-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
Apply the configuration
[root@k8s-node1 k8s]# kubectl apply -f helm-rbac.yml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
Install Tiller (run on the master)
Initialize ⚠️⚠️ the Tiller image version must match your helm client version
# Google registry
helm init --service-account tiller --upgrade -i gcr.io/kubernetes-helm/tiller:v2.16.2
# Aliyun mirror
helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.2 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
# Verify Tiller
kubectl get pods --namespace kube-system | grep tiller
# Delete Tiller
kubectl delete deployment tiller-deploy --namespace kube-system
Configure a domestic (China) chart repository for Helm
# Remove the default repo
helm repo remove stable
# Add a domestic mirror
helm repo add stable https://burdenbear.github.io/kube-charts-mirror/
# or
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
# the default one
helm repo add stable https://kubernetes-charts.storage.googleapis.com
# Check which repos are configured
helm repo list
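After switching repos it is worth refreshing the local chart index and confirming that the charts needed later (openebs, for example) are actually served by the mirror:
helm repo update
helm search openebs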
Pitfall
If helm list fails with: failed to list: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
Run the following commands to create the tiller serviceaccount and grant it cluster-admin rights:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
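Once the tiller-deploy Pod has restarted with the new service account, the original command should work again:
helm list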
Install OpenEBS
“Most of the OpenEBS images are hosted on quay.io, which is unreachable from behind the firewall, so I switched them to Docker Hub images.”
⚠️⚠️⚠️ Only re-apply the master taint after KubeSphere has been installed ⚠️⚠️⚠️
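A sketch of the usual sequence (the node name, namespace and chart version below are assumptions; check the KubeSphere/OpenEBS docs for the exact values): remove the master taint so OpenEBS Pods can schedule, install the chart with Helm, and re-taint the master only after KubeSphere is up.
# allow scheduling on the master (node name is an example)
kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-
# install OpenEBS from the stable repo into its own namespace
kubectl create ns openebs
helm install --namespace openebs --name openebs stable/openebs --version 1.5.0
# after KubeSphere is installed, restore the taint
kubectl taint nodes k8s-node1 node-role.kubernetes.io/master=:NoSchedule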
Command summary
# helm: remove the release in the namespace
helm del --purge openebs
helm ls --all openebs        # check the release status
# kubectl: delete the namespace
kubectl delete namespaces openebs
Run: helm ls --all openebs; to check the status of the release
Or run: helm del --purge openebs; to delete it