Kubernetes Deployment (Single Master, with Certificates)
Published: 2019-06-09


Environment overview:

  • Server configuration
hostname  role            os       ip
test10    DNS+docker_hub  centos7  10.10.10.10
test11    master+node     centos7  10.10.10.11
test12    node            centos7  10.10.10.12
  • Component distribution
Host    OS       IP           Component                Description
test10  centos7  10.10.10.10  dnsmasq                  DNS resolution
test10  centos7  10.10.10.10  harbor                   Docker image registry
test10  centos7  10.10.10.10  flannel                  Overlay network for Docker and the hosts
test11  centos7  10.10.10.11  etcd                     Stores cluster configuration and resource state
test11  centos7  10.10.10.11  flannel                  Overlay network for Docker and the hosts
test11  centos7  10.10.10.11  kube-apiserver           Exposes the RESTful API that clients and other components call to manage resources
test11  centos7  10.10.10.11  kube-scheduler           Scheduler; decides which node each container runs on
test11  centos7  10.10.10.11  kube-controller-manager  Manages cluster resources and keeps them in their desired state
test11  centos7  10.10.10.11  kubectl                  Essential, convenient command-line management tool
test11  centos7  10.10.10.11  kubelet                  Receives creation requests from the master and reports runtime status back
test11  centos7  10.10.10.11  kube-proxy               Service access control
test12  centos7  10.10.10.12  flannel                  Overlay network for Docker and the hosts
test12  centos7  10.10.10.12  kubectl                  Essential, convenient command-line management tool
test12  centos7  10.10.10.12  kubelet                  Receives creation requests from the master and reports runtime status back
test12  centos7  10.10.10.12  kube-proxy               Service access control
  • Installation order
Order  Component                Server(s)
1      dnsmasq                  test10
2      docker_hub-harbor        test10
3      etcd                     test11
4      flannel                  test10, test11, test12
5      kube-apiserver           test11
6      kube-scheduler           test11
7      kube-controller-manager  test11
8      kubectl                  test11, test12
9      kubelet                  test11, test12
10     kube-proxy               test11, test12

Kubernetes version: 1.14.0

Install method: binaries
Authentication: certificate keys
Single management node


Pre-configuration

Configure the servers (all servers)

  • Disable SELinux on all servers
setenforce 0
sed -i "s/^SELINUX\=.*/SELINUX=disabled/g" /etc/selinux/config
  • Disable the firewall on all servers
systemctl stop firewalld.service
systemctl disable firewalld.service
  • Disable swap on all servers
swapoff -a
## Also comment out the swap entry in /etc/fstab
  • Install common utilities and point the yum repos at the Aliyun mirrors
yum -y install epel-release wget yum-utils device-mapper-persistent-data lvm2
mkdir -p /etc/yum.repos.d/bak
mv /etc/yum.repos.d/* /etc/yum.repos.d/bak
wget http://mirrors.aliyun.com/repo/Centos-7.repo -P /etc/yum.repos.d/
wget http://mirrors.aliyun.com/repo/epel-7.repo -P /etc/yum.repos.d/
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install wget vim ntp unzip zip net-snmp* telnet lrzsz bash-completion net-tools ntpdate supervisor
  • Install docker-ce
## Install
yum -y install docker-ce docker-ce-cli containerd.io
## Configure the Aliyun registry mirror
mkdir -p /etc/docker
cat >> /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://hh3tvdpc.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
  • Sync time
ntpdate ntp1.aliyun.com
echo '*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com > /dev/null 2>&1' >> /var/spool/cron/root
  • Set the hostname (and hosts entries as needed)
hostname test10
hostnamectl set-hostname test10
  • Configure kernel parameters
## The heredoc body was lost in extraction; the values below are the ones typically set for Kubernetes
cat >> /etc/sysctl.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
echo "modprobe br_netfilter" >> /etc/rc.local
sysctl -p /etc/sysctl.conf
  • Passwordless SSH between servers (see the verification sketch below)
## On the master
ssh-keygen
## Authorize the other servers
ssh-copy-id 10.10.10.10
ssh-copy-id 10.10.10.11
ssh-copy-id 10.10.10.12
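A quick way to confirm the passwordless setup from the master is to run a command on every host in a loop (a minimal sketch; BatchMode makes ssh fail fast instead of prompting for a password):

for ip in 10.10.10.10 10.10.10.11 10.10.10.12; do
  ssh -o BatchMode=yes "$ip" hostname  # prints each host's name if key auth works
done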

Configure certificates

  • Download and install the certificate tooling
wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
mkdir -p /data/k8s_key
cd /data/k8s_key

The following certificates are needed:

CA certificate
etcd certificate
apiserver certificate
proxy certificate
kubectl certificate

  • Generate the CA certificate
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "JS",
      "L": "NJ",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Adjust the 'C', 'ST', 'L', 'O', and 'OU' values as needed.

"CN":Common Name,kube-apiserver从证书中提取该字段作为请求的用户名(User name);浏览器检验该字段验证网站是否合法;
“O”:Organization,kube-apiserver从证书提取该字段作为请求用户所属的组(Group);
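To confirm these values actually landed in the issued certificate, the subject of ca.pem can be printed (a sketch using openssl to inspect the cfssl output):

# CN and O here should match what was set in ca-csr.json
openssl x509 -in ca.pem -noout -subject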

  • Generate the etcd certificate
cat > etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.10.10.11",
    "etcd.k8s.test"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "JS",
      "L": "NJ",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

'hosts' is required here. Fill in the actual IP addresses and domain names where etcd is deployed, otherwise certificate validation will fail.
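The SAN list baked into the issued certificate can be verified before deploying (a sketch):

# The IPs and the etcd.k8s.test domain from 'hosts' should appear as Subject Alternative Names
openssl x509 -in etcd.pem -noout -text | grep -A1 "Subject Alternative Name"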

  • Generate the apiserver certificate
cat > apiserver-csr.json << EOF
{
  "CN": "apiserver",
  "hosts": [
    "127.0.0.1",
    "10.10.10.11",
    "apiserver.k8s.test"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "JS",
      "L": "NJ",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes apiserver-csr.json | cfssljson -bare apiserver

'hosts' is required here. Fill in the actual IP addresses and domain names where the apiserver is deployed, otherwise certificate validation will fail.

  • Generate the proxy certificate
cat > proxy-csr.json << EOF
{
  "CN": "proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "JS",
      "L": "NJ",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes proxy-csr.json | cfssljson -bare proxy

'hosts' may be left empty; adding new nodes to the cluster does not require regenerating this certificate.

  • Generate the kubectl certificate
cat > kubectl-csr.json << EOF
{
  "CN": "kubectl",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "JS",
      "L": "NJ",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubectl-csr.json | cfssljson -bare kubectl
  • Sync the certificates to the other servers
ssh 10.10.10.10 "mkdir -p /data/k8s_key"
scp -r /data/k8s_key 10.10.10.10:/data/
ssh 10.10.10.12 "mkdir -p /data/k8s_key"
scp -r /data/k8s_key 10.10.10.12:/data/

Deploy dnsmasq

test10

  • Install
yum -y install dnsmasq
  • Edit the configuration file
cp /etc/dnsmasq.conf /etc/dnsmasq.conf.bak
## The heredoc body was lost in extraction; a minimal config consistent with the files created below would be:
cat > /etc/dnsmasq.conf << EOF
listen-address=127.0.0.1,10.10.10.10
resolv-file=/data/dnsmasq/resolv.dnsmasq
addn-hosts=/data/dnsmasq/dnsmasq.hosts
conf-dir=/data/dnsmasq/dnsmasq.d
log-facility=/data/dnsmasq/log/dnsmasq.log
EOF
  • Create the related files and directories
mkdir -p /data/dnsmasq/{dnsmasq.d,log}
touch /data/dnsmasq/{dnsmasq.hosts,resolv.dnsmasq}
  • Configure upstream DNS forwarders (for non-custom domain lookups)
cat > /data/dnsmasq/resolv.dnsmasq << EOF
nameserver 223.5.5.5
nameserver 1.2.4.8
EOF
  • Add hosts records (centralized hosts lookups)
cat > /data/dnsmasq/dnsmasq.hosts << EOF
10.10.10.10 test10
10.10.10.11 test11
10.10.10.12 test12
EOF

Changing the hosts file specified by addn-hosts requires a dnsmasq restart; records in a directory specified by hostsdir, by contrast, are picked up without one.
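A sketch of the hostsdir alternative, assuming a hypothetical /data/dnsmasq/hosts.d directory is declared in dnsmasq.conf as hostsdir=/data/dnsmasq/hosts.d:

mkdir -p /data/dnsmasq/hosts.d
# Files created here are noticed by dnsmasq via inotify; no restart needed
echo "10.10.10.13 test13" > /data/dnsmasq/hosts.d/test13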

  • Add custom domains (internal custom-domain lookups)
## The heredoc body was lost in extraction; the records implied by the domains used in this guide would be:
cat > /data/dnsmasq/dnsmasq.d/k8s.tes << EOF
address=/harbor.k8s.test/10.10.10.10
address=/etcd.k8s.test/10.10.10.11
address=/apiserver.k8s.test/10.10.10.11
EOF
  • Start the service and enable it at boot
systemctl start dnsmasq.service
systemctl enable dnsmasq.service
  • Point DNS on all servers at 10.10.10.10
# Edit /etc/sysconfig/network-scripts/ifcfg-eth0
PEERDNS=no  # refuse DNS settings pushed by DHCP
DNS1=10.10.10.10  # custom DNS server address
# Restart networking
systemctl restart network.service
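Resolution can then be verified from any server against test10 (a sketch; the custom domain assumes the k8s.tes records above):

# Custom internal domain, answered from dnsmasq.d
dig @10.10.10.10 harbor.k8s.test +short
# Hosts record, answered from dnsmasq.hosts
dig @10.10.10.10 test11 +short
# Anything else is forwarded to the upstream resolvers
dig @10.10.10.10 www.aliyun.com +short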


Deploy etcd

test11

  • Download and install
mkdir -p /setup/ /opt/etcd/{bin,conf} /data/etcd/
wget https://github.com/coreos/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz -P /setup/
cd /setup/
tar zxvf etcd-v3.3.13-linux-amd64.tar.gz
mv etcd-v3.3.13-linux-amd64/etcd* /opt/etcd/bin/
chmod +x /opt/etcd/bin/etcd*
ln -s /opt/etcd/bin/etcd /usr/bin/
ln -s /opt/etcd/bin/etcdctl /usr/bin/
  • Create the config file /opt/etcd/conf/etcd.conf
ETCD_CONF='--name test11 \
--data-dir /data/etcd \
--listen-peer-urls https://0.0.0.0:2380 \
--listen-client-urls https://0.0.0.0:2379 \
--advertise-client-urls https://10.10.10.11:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster-state new \
--initial-advertise-peer-urls https://10.10.10.11:2380 \
--initial-cluster test11=https://10.10.10.11:2380 \
--client-cert-auth \
--trusted-ca-file /data/k8s_key/ca.pem \
--cert-file /data/k8s_key/etcd.pem \
--key-file /data/k8s_key/etcd-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file /data/k8s_key/ca.pem \
--peer-cert-file /data/k8s_key/etcd.pem \
--peer-key-file /data/k8s_key/etcd-key.pem'
  • Create the unit file /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/etcd/conf/etcd.conf
ExecStart=/opt/etcd/bin/etcd $ETCD_CONF
Restart=on-failure
RestartSec=2
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • Start etcd and enable it at boot
systemctl daemon-reload
systemctl start etcd.service
systemctl enable etcd.service
  • Verify the deployment
# Check member status
etcdctl --ca-file=/data/k8s_key/ca.pem --cert-file=/data/k8s_key/etcd.pem --key-file=/data/k8s_key/etcd-key.pem --endpoints=https://127.0.0.1:2379 member list
# Check cluster health
etcdctl --ca-file=/data/k8s_key/ca.pem --cert-file=/data/k8s_key/etcd.pem --key-file=/data/k8s_key/etcd-key.pem --endpoints=https://127.0.0.1:2379 cluster-health

Deploy Harbor (Docker registry: hosts project images and manages versions)

test10

  • Download
yum -y install docker-compose.noarch
mkdir -p /setup /opt/harbor/
wget https://storage.googleapis.com/harbor-releases/release-1.8.0/harbor-offline-installer-v1.8.1.tgz -P /setup
tar xvf /setup/harbor-offline-installer-v1.8.1.tgz -C /opt/
  • Edit the configuration
## Edit /opt/harbor/harbor.yml
hostname: harbor.k8s.test  # domain Harbor listens on
harbor_admin_password: admin123  # admin password for the management UI
  • Install Harbor
cd /opt/harbor/
./install.sh

If errors occur, resolve them per the error messages (usually a missing dependency or a version mismatch).

  • Access the Harbor management UI
# URL                  http://harbor.k8s.test
# Username / password  admin / admin123
  • Configure Docker to reach Harbor over HTTP (on every machine that pushes images)
## Edit /etc/docker/daemon.json
{
  "registry-mirrors": ["https://hh3tvdpc.mirror.aliyuncs.com"],
  "insecure-registries": ["harbor.k8s.test"]
}
## Restart docker
systemctl restart docker.service
## Log in to the registry
docker login harbor.k8s.test
admin
admin123
  • Push an image (a concrete run-through follows below)
## In the management UI, create a project named dockertest
## Tag the prepared image for the private registry
docker tag SOURCE_IMAGE[:TAG] harbor.k8s.test/dockertest/IMAGE[:TAG]
docker tag SOURCE_IMAGE[:TAG] harbor.k8s.test/dockertest/IMAGE:latest
## Push the image(s)
docker push harbor.k8s.test/dockertest/IMAGE
## Confirm in the management UI that the image now appears under the dockertest project
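A concrete run-through with a public nginx image (a sketch; the dockertest project is the one created above, and the 1.16 tag is illustrative):

docker pull nginx:1.16
docker tag nginx:1.16 harbor.k8s.test/dockertest/nginx:1.16
docker tag nginx:1.16 harbor.k8s.test/dockertest/nginx:latest
docker push harbor.k8s.test/dockertest/nginx:1.16
docker push harbor.k8s.test/dockertest/nginx:latest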

Deploy flannel (on all servers) and reconfigure docker-ce to use the flannel network

test10, test11, test12

Docker on test10 does not need its startup options changed.

  • Download and install
mkdir -p /setup/ /opt/flannel/{bin,conf} /data/flannel
wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz -P /setup/
cd /setup/
tar zxvf flannel-v0.11.0-linux-amd64.tar.gz -C /opt/flannel/bin/
chmod +x /opt/flannel/bin/*
ln -s /opt/flannel/bin/flanneld /usr/bin/
  • Register the network range in etcd

    Run this once, on the master.

/opt/etcd/bin/etcdctl \
--ca-file=/data/k8s_key/ca.pem \
--cert-file=/data/k8s_key/etcd.pem \
--key-file=/data/k8s_key/etcd-key.pem \
--endpoints=https://127.0.0.1:2379 \
set /test_1/network/config '{"Network": "172.30.0.0/16","SubnetLen": 24, "SubnetMin": "172.30.1.0","SubnetMax": "172.30.20.0", "Backend": {"Type": "vxlan"}}'

This registers the 172.30.0.0/16 range: each host gets a /24 subnet, allocated between 172.30.1.0/24 and 172.30.20.0/24, using the vxlan backend.
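The registered config can be read back to confirm it was stored (a sketch mirroring the set command above):

/opt/etcd/bin/etcdctl \
--ca-file=/data/k8s_key/ca.pem \
--cert-file=/data/k8s_key/etcd.pem \
--key-file=/data/k8s_key/etcd-key.pem \
--endpoints=https://127.0.0.1:2379 \
get /test_1/network/config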

  • Create the config file /opt/flannel/conf/flannel.conf
FLANNEL_CONF="-etcd-cafile=/data/k8s_key/ca.pem \
-etcd-certfile=/data/k8s_key/etcd.pem \
-etcd-keyfile=/data/k8s_key/etcd-key.pem \
-etcd-endpoints=https://etcd.k8s.test:2379 \
-etcd-prefix=/test_1/network"
  • Create the unit file /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=-/opt/flannel/conf/flannel.conf
ExecStart=/opt/flannel/bin/flanneld $FLANNEL_CONF
ExecStartPost=/opt/flannel/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
  • Start flannel and enable it at boot
systemctl daemon-reload
systemctl start flannel.service
systemctl enable flannel.service
  • Check startup status
# Inspect what was registered in etcd
etcdctl --ca-file=/data/k8s_key/ca.pem --cert-file=/data/k8s_key/etcd.pem --key-file=/data/k8s_key/etcd-key.pem --endpoints=https://etcd.k8s.test:2379 ls -r /test_1/network/subnets/
etcdctl --ca-file=/data/k8s_key/ca.pem --cert-file=/data/k8s_key/etcd.pem --key-file=/data/k8s_key/etcd-key.pem --endpoints=https://etcd.k8s.test:2379 get /test_1/network/subnets/172.30.5.0-24
# Check local network interfaces
ip add
# Check the generated docker network options file
cat /run/flannel/docker
  • Modify the Docker unit file (to use the flannel network)

    /usr/lib/systemd/system/docker.service

[Service]
# Modify
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
# Add
EnvironmentFile=/run/flannel/docker
  • Start Docker and enable it at boot
systemctl daemon-reload
systemctl start docker.service
systemctl enable docker.service
  • Check the Docker network
## Check the local interface
ip a show docker0
## Inspect the docker bridge configuration
docker inspect bridge

Deploy kube-apiserver

test11

  • Download and install
mkdir /setup && cd /setup
wget https://dl.k8s.io/v1.14.0/kubernetes-server-linux-amd64.tar.gz -P /setup
tar zxvf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,conf} /data/kubernetes/logs/
cp kubernetes/server/bin/kube-apiserver /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/kube-apiserver
  • Create the token.csv file
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
## The heredoc body was lost in extraction; the standard format, matching the kubelet-bootstrap user used later, is:
cat > /opt/kubernetes/conf/token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
  • Create the advanced audit policy file /opt/kubernetes/conf/audit-policy.yaml
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
  • Create the config file /opt/kubernetes/conf/kube-apiserver.conf
KUBE_APISERVER="--apiserver-count=1 \
--logtostderr=false \
--audit-log-path=/data/kubernetes/logs/kube-apiserver.log \
--audit-policy-file=/opt/kubernetes/conf/audit-policy.yaml \
--v=4 \
--bind-address=10.10.10.11 \
--secure-port=6443 \
--advertise-address=10.10.10.11 \
--allow-privileged=true \
--authorization-mode=Node,RBAC \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/conf/token.csv \
--client-ca-file=/data/k8s_key/ca.pem \
--requestheader-client-ca-file=/data/k8s_key/ca.pem \
--etcd-cafile=/data/k8s_key/ca.pem \
--etcd-certfile=/data/k8s_key/apiserver.pem \
--etcd-keyfile=/data/k8s_key/apiserver-key.pem \
--etcd-servers=https://etcd.k8s.test:2379 \
--service-account-key-file=/data/k8s_key/ca-key.pem \
--service-cluster-ip-range=10.254.0.0/16 \
--service-node-port-range=30000-50000 \
--kubelet-client-certificate=/data/k8s_key/apiserver.pem \
--kubelet-client-key=/data/k8s_key/apiserver-key.pem \
--tls-cert-file=/data/k8s_key/apiserver.pem \
--tls-private-key-file=/data/k8s_key/apiserver-key.pem"
  • Create the unit file /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/conf/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER
Restart=on-failure
RestartSec=2
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • Start kube-apiserver and enable it at boot
systemctl daemon-reload
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service

Deploy kube-scheduler

test11

  • Download and install
cd /setup
wget https://dl.k8s.io/v1.14.0/kubernetes-server-linux-amd64.tar.gz -P /setup
tar zxvf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,conf} /data/kubernetes/logs/
cp kubernetes/server/bin/kube-scheduler /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/kube-scheduler
  • Create the config file /opt/kubernetes/conf/kube-scheduler.conf
KUBE_SCHEDULER="--leader-elect \
--logtostderr=false \
--log-dir=/data/kubernetes/logs/ \
--log-file=/data/kubernetes/logs/kube-scheduler.log \
--v=4 \
--master=http://127.0.0.1:8080"
  • Create the unit file /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=kube-apiserver.service

[Service]
Type=simple
EnvironmentFile=-/opt/kubernetes/conf/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER
Restart=on-failure
RestartSec=2
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • Start kube-scheduler and enable it at boot
systemctl daemon-reload
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service

Deploy kube-controller-manager

test11

  • Download and install
cd /setup
wget https://dl.k8s.io/v1.14.0/kubernetes-server-linux-amd64.tar.gz -P /setup
tar zxvf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,conf} /data/kubernetes/logs/
cp kubernetes/server/bin/kube-controller-manager /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/kube-controller-manager
  • Create the config file /opt/kubernetes/conf/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER="--logtostderr=false \
--log-dir=/data/kubernetes/logs/ \
--log-file=/data/kubernetes/logs/kube-controller-manager.log \
--v=4 \
--master=http://127.0.0.1:8080 \
--leader-elect=true \
--address=0.0.0.0 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/data/k8s_key/ca.pem \
--cluster-signing-key-file=/data/k8s_key/ca-key.pem \
--root-ca-file=/data/k8s_key/ca.pem \
--service-account-private-key-file=/data/k8s_key/ca-key.pem"
  • Create the unit file /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=kube-apiserver.service

[Service]
Type=simple
EnvironmentFile=-/opt/kubernetes/conf/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER
Restart=on-failure
RestartSec=2
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • Start kube-controller-manager and enable it at boot
systemctl daemon-reload
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service

Configure kubectl on the master

test11

  • Download and install
cd /setup
wget https://dl.k8s.io/v1.14.0/kubernetes-server-linux-amd64.tar.gz -P /setup
tar zxvf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,conf} /data/kubernetes/logs/
cp kubernetes/server/bin/kubectl /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/kubectl
ln -s /opt/kubernetes/bin/kubectl /usr/sbin/
  • Check master component status
kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok
componentstatus/controller-manager   Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}

Use kubectl on the node servers

test12

kubectl connects to the apiserver over localhost:8080 by default, so it must be configured to use certificates against the secure API endpoint instead.

  • Download and install
cd /setup
wget https://dl.k8s.io/v1.14.0/kubernetes-node-linux-amd64.tar.gz -P /setup
tar zxvf kubernetes-node-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,conf} /data/kubernetes/logs/
cp kubernetes/node/bin/kubectl /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/kubectl
ln -s /opt/kubernetes/bin/kubectl /usr/sbin/
  • Add the kubectl user on the master
## Resource manifest that authorizes the kubectl user: kubectl.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubectl
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubectl

## Apply the manifest
kubectl create -f kubectl.yaml
  • Create the config file on the node
## The commands below generate /root/.kube/config
# Set cluster parameters; --server points at the master
kubectl config set-cluster kubernetes \
--certificate-authority=/data/k8s_key/ca.pem \
--server=https://apiserver.k8s.test:6443
# Set client authentication parameters
kubectl config set-credentials kubectl \
--certificate-authority=/data/k8s_key/ca.pem \
--client-certificate=/data/k8s_key/kubectl.pem \
--client-key=/data/k8s_key/kubectl-key.pem
# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubectl
# Switch to the default context
kubectl config use-context default
  • Verify on the node
## Inspect the generated config
cat /root/.kube/config
## Query the cluster
kubectl get node

Deploy kubelet

test11, test12

  • Download and install
cd /setup
wget https://dl.k8s.io/v1.14.0/kubernetes-node-linux-amd64.tar.gz -P /setup
tar zxvf kubernetes-node-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,conf} /data/kubernetes/logs/
cp kubernetes/node/bin/{kubelet,kubectl} /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/*
ln -s /opt/kubernetes/bin/kubectl /usr/sbin/
  • Create bootstrap.kubeconfig (run on the master)
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/data/k8s_key/ca.pem \
--embed-certs=true \
--server="https://apiserver.k8s.test:6443" \
--kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters (the token must match the one in the apiserver's token.csv; see the sketch below)
kubectl config set-credentials kubelet-bootstrap \
--token=2aa4a30846d730e004da12aaa5b43142 \
--kubeconfig=bootstrap.kubeconfig
# Set the context
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Switch to the default context and write the file
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
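Rather than pasting the token by hand, it can be read out of the token.csv created earlier (a sketch):

# The first comma-separated column of token.csv is the bootstrap token
BOOTSTRAP_TOKEN=$(cut -d, -f1 /opt/kubernetes/conf/token.csv)
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig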
  • Bind the kubelet-bootstrap user to the system cluster role (run on the master)
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
  • Copy bootstrap.kubeconfig into the config directory and scp it to all node servers
cp bootstrap.kubeconfig /opt/kubernetes/conf
scp /opt/kubernetes/conf/bootstrap.kubeconfig test12:/opt/kubernetes/conf/
  • Create the kubelet parameter template /opt/kubernetes/conf/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.10.10.12
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.10.10.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

address is the local machine's IP.

  • Create the kubelet config file /opt/kubernetes/conf/kubelet.conf
KUBELET="--logtostderr=false \
--log-dir=/data/kubernetes/logs/ \
--log-file=/data/kubernetes/logs/kubelet.log \
--v=4 \
--hostname-override=10.10.10.12 \
--kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/conf/bootstrap.kubeconfig \
--config=/opt/kubernetes/conf/kubelet.config \
--cert-dir=/data/k8s_key/ \
--pod-infra-container-image=harbor.k8s.test/test/nginx:latest"

hostname-override takes the local machine's IP.

  • Create the unit file /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service

[Service]
Type=simple
EnvironmentFile=-/opt/kubernetes/conf/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET
Restart=on-failure
RestartSec=2
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • Start kubelet and enable it at boot
systemctl daemon-reload
systemctl start kubelet.service
systemctl enable kubelet.service
  • Approve the CSR on the master
# List pending CSR requests
kubectl get csr
# Approve a CSR (a batch sketch follows below)
kubectl certificate approve node-csr-OJXR5HcB9oYb8cZCdWeg6iQZgcLiSaORWwsImOBZVS8
# Check cluster status (the node appears once its CSR is approved)
kubectl get nodes
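When several nodes join at once, all pending CSRs can be approved in one pass (a sketch; convenient in a lab, but review requests individually in production):

kubectl get csr -o name | xargs -r kubectl certificate approve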
  • Delete an expired CSR
kubectl delete csr node-csr-XiPfkW9BD3t3XmpcxbQ3m7CA5NPjNp_2OQSY2Tl94gA
  • Delete a stale node
kubectl delete node 10.10.10.12
  • Label the nodes
kubectl label node 10.10.10.11 node-role.kubernetes.io/master='master'
kubectl label node 10.10.10.12 node-role.kubernetes.io/node='node'
  • View cs, service, node, and csr information
kubectl get cs,service,node,csr
  • View detailed information (server side)
kubectl describe service

Deploy kube-proxy

test11, test12

  • Download and install
cd /setup
wget https://dl.k8s.io/v1.14.0/kubernetes-node-linux-amd64.tar.gz -P /setup
tar zxvf kubernetes-node-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,conf} /data/kubernetes/logs/
cp kubernetes/node/bin/{kube-proxy,kubectl} /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/*
ln -s /opt/kubernetes/bin/kubectl /usr/sbin/
  • Create kube-proxy.kubeconfig (run on the master)
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/data/k8s_key/ca.pem \
--embed-certs=true \
--server="https://apiserver.k8s.test:6443" \
--kubeconfig=kube-proxy.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kube-proxy \
--client-certificate=/data/k8s_key/proxy.pem \
--client-key=/data/k8s_key/proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
# Set the context
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
# Switch to the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
  • Copy kube-proxy.kubeconfig into the config directory and scp it to all node servers
cp kube-proxy.kubeconfig /opt/kubernetes/conf
scp /opt/kubernetes/conf/kube-proxy.kubeconfig test12:/opt/kubernetes/conf
  • Create the config file /opt/kubernetes/conf/kube-proxy.conf
## The original post mistakenly repeats the kubelet config here. A plausible kube-proxy.conf, supplying the
## $KUBE_PROXY variable the unit file below expects (--cluster-cidr is an assumption), would be:
KUBE_PROXY="--logtostderr=false \
--log-dir=/data/kubernetes/logs/ \
--log-file=/data/kubernetes/logs/kube-proxy.log \
--v=4 \
--hostname-override=10.10.10.12 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/opt/kubernetes/conf/kube-proxy.kubeconfig"

--hostname-override must match the value used by kubelet (checked by the sketch below); otherwise kube-proxy will not find the Node after it starts and will not create any ipvs rules.
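A quick consistency check on each node, comparing the two config files created above (a sketch):

# Both lines should print the same IP
grep -o 'hostname-override=[^ ]*' /opt/kubernetes/conf/kubelet.conf
grep -o 'hostname-override=[^ ]*' /opt/kubernetes/conf/kube-proxy.conf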

  • Create the unit file /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
Type=simple
EnvironmentFile=-/opt/kubernetes/conf/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY
Restart=on-failure
RestartSec=2
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • Start kube-proxy and enable it at boot
systemctl daemon-reload
systemctl start kube-proxy.service
systemctl enable kube-proxy.service
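With every component running, a quick end-to-end smoke test (a sketch; the image path assumes the Harbor project pushed earlier, and in 1.14 kubectl run still creates a Deployment, with a deprecation warning):

kubectl run nginx-test --image=harbor.k8s.test/dockertest/nginx:latest --replicas=1
kubectl expose deployment nginx-test --type=NodePort --port=80
kubectl get pods,svc -o wide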

Reposted from: https://www.cnblogs.com/taoyuxuan/p/11205430.html
