Installing Kubernetes on Fedora (Single Node)

2018-05-06

Official documentation link

https://kubernetes.io/docs/getting-started-guides/fedora/fedora_manual_config/

Preparing the installation environment

Prepare the hosts

Two virtual machines were prepared, with the following details:

fed-master = 192.168.2.86
fed-node = 192.168.2.87

Switch to the root user and install kubernetes and etcd:

dnf -y install kubernetes
dnf -y install etcd

On every machine, add fed-master and fed-node to /etc/hosts:

echo "192.168.2.86    fed-master
192.168.2.87    fed-node" >> /etc/hosts
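
Assuming the entries were added correctly, a quick sanity check on each machine is to confirm that both names resolve; these are just generic checks, not part of the official guide:

# Both names should resolve to the IPs above
getent hosts fed-master fed-node
ping -c 1 fed-node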

Edit /etc/kubernetes/config to point at the master node

This setting should be identical on all hosts:

KUBE_MASTER="--master=http://fed-master:8080"

Install iptables-services:

dnf install iptables-services  

Disable the firewall on all hosts:

systemctl mask firewalld.service;
systemctl stop firewalld.service;

systemctl disable iptables.service;
systemctl stop iptables.service;
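
To double-check that the firewall really is out of the way, something like the following should report both services as inactive and masked/disabled:

# Expect "inactive" from the first command, "masked"/"disabled" from the second
systemctl is-active firewalld.service iptables.service
systemctl is-enabled firewalld.service iptables.service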

Configure the Kubernetes services on the master node (fed-master)

Configure kube-apiserver

Edit /etc/kubernetes/apiserver so that it looks similar to the configuration below. Make sure the IP range given to service-cluster-ip-range is not used anywhere else.

# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379,http://127.0.0.1:4001"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
# KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

Note: the KUBE_ADMISSION_CONTROL setting is commented out here, which disables the security-related admission options for now.
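
A rough way to make sure the 10.254.0.0/16 service range does not collide with anything already on the hosts is to look at the local addresses and routes; this is just a sanity check I would add, not a guarantee:

# No output from either command means the range is not in use locally
ip -4 addr show | grep "10\.254\."
ip route | grep "10\.254\."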

Configure etcd

Edit /etc/etcd/etcd.conf so that etcd listens on all IPs rather than only 127.0.0.1:

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
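
Once etcd has been restarted (it is handled together with the other master services in the next step), a quick check that it really listens beyond localhost is to query its version endpoint from another machine, assuming the default client port 2379:

# Run from fed-node; should return etcd version info as JSON
curl http://fed-master:2379/version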

Start the master node services

The services involved are: etcd kube-apiserver kube-controller-manager kube-scheduler

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    sudo systemctl restart $SERVICES
    sudo systemctl enable $SERVICES
    sudo systemctl status $SERVICES
done
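
If everything started cleanly, the API server should now answer on port 8080. A minimal check (exact output differs slightly between versions):

# Health endpoint should print "ok"
curl http://127.0.0.1:8080/healthz
# scheduler, controller-manager and etcd should all show Healthy
kubectl get componentstatuses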

Configure Kubernetes on the worker node (fed-node)

Configure the kubelet

Edit /etc/kubernetes/kubelet so that it looks similar to this:

###
# Kubernetes kubelet (node) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=fed-node"

# location of the api-server
KUBELET_ARGS="--cgroup-driver=systemd --fail-swap-on=false  --kubeconfig=/etc/kubernetes/master-kubeconfig.yaml --require-kubeconfig"

Create a new /etc/kubernetes/master-kubeconfig.yaml file with the following content:
vi /etc/kubernetes/master-kubeconfig.yaml

kind: Config
clusters:
- name: local
  cluster:
    server: http://fed-master:8080
users:
- name: kubelet
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
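
Before starting the kubelet it can be worth confirming from fed-node that the API server address in this kubeconfig is actually reachable; this is only a connectivity check, not part of the official guide:

# Should return the API server build/version info as JSON
curl http://fed-master:8080/version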

Start the services on the worker node (fed-node)

The services involved are: kube-proxy kubelet docker

for SERVICES in kube-proxy kubelet docker; do
    sudo systemctl restart $SERVICES
    sudo systemctl enable $SERVICES
    sudo systemctl status $SERVICES
done
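
To confirm the services stayed up instead of crash-looping, something like this helps (same unit names as in the loop above):

# All three should print "active"
systemctl is-active docker kubelet kube-proxy
# If not, the last kubelet log lines usually show why
journalctl -u kubelet --no-pager -n 20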

Verify the installation

On the master (fed-master), verify that the worker node (fed-node) shows up and is in Ready state.

kubectl get nodes

The output should look something like this:

[root@fed-master wangxianfeng]# kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
fed-node   Ready     <none>    1m        v1.9.3
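
As an optional smoke test (the deployment name and image here are just examples I picked, not part of the guide), you can schedule a pod and watch it land on fed-node; on Kubernetes 1.9 kubectl run creates a Deployment by default:

# Create a single nginx pod via a deployment and check where it runs
kubectl run test-nginx --image=nginx --replicas=1
kubectl get pods -o wide
# Clean up when done
kubectl delete deployment test-nginx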

Troubleshooting

The master (fed-master) cannot see the worker node (fed-node)

I followed the steps above exactly, but in the end kubectl get nodes returned No resources found. Frustrating. So how do you track the problem down?

First, check that all the expected processes are running on the master:

ps -ef | grep kube

Everything that should be there is there. Checking the worker node (fed-node) next, the kubelet process is missing. systemctl status kubelet shows the following error:

[root@fed-node wangxianfeng]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2018-05-10 10:23:01 CST; 10min ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
  Process: 1612 ExecStart=/usr/bin/kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_API_SERVER $KUBELET_ADDRESS $KUBELET_PORT $KUBELET_HOSTNAME $KUBE_ALLOW_PRIV >
 Main PID: 1612 (code=exited, status=1/FAILURE)
      CPU: 564ms

May 10 10:23:01 fed-node systemd[1]: kubelet.service: Consumed 564ms CPU time
May 10 10:23:01 fed-node systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
May 10 10:23:01 fed-node systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 10 10:23:01 fed-node systemd[1]: Stopped Kubernetes Kubelet Server.
May 10 10:23:01 fed-node systemd[1]: kubelet.service: Consumed 564ms CPU time
May 10 10:23:01 fed-node systemd[1]: kubelet.service: Start request repeated too quickly.
May 10 10:23:01 fed-node systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 10:23:01 fed-node systemd[1]: Failed to start Kubernetes Kubelet Server.

There was no detailed error visible there, so I used journalctl -f to follow the log, restarted the service, and found the following error:

May 10 10:49:42 fed-node kubelet[3063]: Error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

I had clearly configured the kubelet to use systemd, so why didn't it take effect? Because the configuration was wrong: splitting the flags across two assignments as shown below does not work, so put everything on one line. Don't ask me why, I don't know either:

# location of the api-server
KUBELET_ARGS="--cgroup-driver=systemd  --kubeconfig=/etc/kubernetes/master-kubeconfig.yaml --require-kubeconfig"
# Add your own!
KUBELET_ARGS="--fail-swap-on=false"
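
A quick way to see which cgroup driver docker itself is using (and therefore what the kubelet has to match) is the following; just a generic check, not something from the original guide:

# Prints the driver docker was started with, e.g. "Cgroup Driver: systemd"
docker info 2>/dev/null | grep -i "cgroup driver"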

Swap problem on the virtual machine

The error in the log looks like this:

error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false

The fix is to edit the kubelet config file with sudo vi /etc/kubernetes/kubelet and extend --cgroup-driver=systemd so that it becomes --cgroup-driver=systemd --fail-swap-on=false.
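
An alternative to --fail-swap-on=false is to actually turn swap off, which is what upstream recommends; a minimal sketch, assuming swap is configured in /etc/fstab:

# Disable swap right now
swapoff -a
# Comment out any swap entry so it stays off after a reboot
sed -i '/\sswap\s/ s/^/#/' /etc/fstab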

The iptables-restore problem

The log keeps reporting the following error, and I have no idea why:

May 10 11:06:00 fed-node kube-proxy[946]: E0510 11:06:00.899555     946 proxier.go:1667] Failed to execute iptables-restore: exit status 1 (iptables-restore: invalid option -- '5'
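
I never fully got to the bottom of this one. As a starting point for debugging, checking the iptables version kube-proxy is calling and the flags kube-proxy runs with might help; these are generic checks, not a confirmed fix:

# Package and binary versions involved
rpm -q iptables kubernetes
iptables-restore --version
# Effective kube-proxy unit and its arguments
systemctl cat kube-proxy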
