Easy Kubernetes deployment / management with Ansible behind the GFW.
Currently only Kubernetes 1.8.2 is tested.
- Forked version of kubespray (https://github.com/januszry/kubespray.git)
- Currently only AWS is supported
- Tested OS and kernel: CentOS 7 with kernel >= 4.12
- See requirements.txt
- If you don't want to use a proxy, do not set `http_proxy`, `https_proxy`, or `no_proxy` to empty strings; comment them out instead, or you'll get a broken configuration file in `/etc/systemd/system/docker.service.d` (see the sketch below for what a correct drop-in looks like).
- Do NOT enable docker.service with `systemctl enable docker.service` before adding the machine to the cluster, or our Docker configuration will not take effect.
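A minimal sketch of what a correct generated drop-in should look like; the file name `http-proxy.conf` is an assumption, and the proxy address and `NO_PROXY` list are illustrative:

```sh
# Inspect the drop-in the playbook generated (file name is an assumption).
cat /etc/systemd/system/docker.service.d/http-proxy.conf
# Expected shape (values are placeholders):
#   [Service]
#   Environment="HTTP_PROXY=http://proxy.internal:3128" "HTTPS_PROXY=http://proxy.internal:3128"
#   Environment="NO_PROXY=localhost,127.0.0.1,169.254.169.254"
```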
- Choose a unique environment name (referred to as `<env>` below), since this tool supports deploying multiple environments.
- Export environment variables: `export kenv=<env> && export KUBECONFIG=~/.kube/env-$kenv/config`
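For example, with an environment named `staging` (an illustrative name):

```sh
export kenv=staging && export KUBECONFIG=~/.kube/env-$kenv/config
```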
- Install requirements: `pip install -r requirements.txt`
- Launch machines:
  - etcd machines: `m4.large * 3`, tagged with `k8s-group=etcd`
  - master machines: `c4.large * 2`, tagged with `k8s-group=kube-master`
  - node machines: any type, at least one, tagged with `k8s-group=kube-node,k8s-node-role=<role>`
- Tag all of the above machines with `ansible-app=ansible-k8s,k8s-env=<env>` (e.g. with the AWS CLI, as sketched below)
- (Optional) Tag node machines with `k8s-node-role=<role>`, which will cause those nodes to be labeled with `role=<role>`
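A minimal sketch of the tagging step using the AWS CLI; the instance IDs are placeholders, and `<env>`/`<role>` follow the conventions above:

```sh
# Tag an etcd instance so the dynamic inventory can discover it.
aws ec2 create-tags --resources i-0123456789abcdef0 \
  --tags Key=ansible-app,Value=ansible-k8s Key=k8s-env,Value=<env> \
         Key=k8s-group,Value=etcd

# Tag a node instance, including the optional role tag.
aws ec2 create-tags --resources i-0fedcba9876543210 \
  --tags Key=ansible-app,Value=ansible-k8s Key=k8s-env,Value=<env> \
         Key=k8s-group,Value=kube-node Key=k8s-node-role,Value=<role>
```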
- Put the apiservers (the master instances) behind a load balancer
- Modify vars in `ans/inventory/group_vars/all.yml` (optional; you can also set them as extra vars in the next two steps, as shown after the deploy step):
  - `apiserver_loadbalancer_domain_name`: address of the load balancer for the apiserver
  - `loadbalancer_apiserver.address`: same as above
  - `loadbalancer_apiserver.port`: port of the load balancer for the apiserver
  - `bootstrap_os`
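A minimal sketch of the load-balancer section of `all.yml`, assuming a single ELB address in front of the apiservers; the hostname is an illustrative placeholder, and `6443` is the apiserver's default secure port:

```sh
# Append (or edit the existing keys to match) in ans/inventory/group_vars/all.yml.
cat <<'EOF' >> ans/inventory/group_vars/all.yml
apiserver_loadbalancer_domain_name: "apiserver-<env>.example.com"
loadbalancer_apiserver:
  address: "apiserver-<env>.example.com"
  port: 6443
EOF
```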
- Deploy: `ansible-playbook -i inventory/inv-ec2.py -u <username> -b kubespray/cluster.yml`
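If you skipped editing `all.yml`, the same vars can be passed as extra vars instead; a sketch (the hostname is an illustrative placeholder):

```sh
ansible-playbook -i inventory/inv-ec2.py -u <username> -b \
  -e apiserver_loadbalancer_domain_name=apiserver-<env>.example.com \
  kubespray/cluster.yml
```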
- Scale: `ansible-playbook -i inventory/inv-ec2.py -u <username> -b kubespray/scale.yml`
- Copy kubeconfig: `ansible-playbook -i inventory/inv-ec2.py -u <username> playbooks/kubeconfig.yml`
- Check that the cluster is running: `kubectl cluster-info && kubectl get nodes --show-labels`
- To manage multiple environments with kubectl, you have several choices, since kubeconfig is copied to `~/.kube/env-<env>` (referred to as `<home>` below):
  - specify kubeconfig explicitly: `kubectl --kubeconfig <home>/config ...`
  - export once, then call `kubectl` freely in the current terminal session: `export KUBECONFIG=<home>/config`
  - make the symlinks manually: `ln -sf <home>/config ~/.kube/config && ln -sf <home>/ssl ~/.kube/ssl`
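For example, to inspect two environments side by side (the environment names are illustrative):

```sh
kubectl --kubeconfig ~/.kube/env-staging/config get nodes
kubectl --kubeconfig ~/.kube/env-prod/config get nodes
```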
- To remove a node (the full sequence is sketched below):
  - `kubectl drain <node>`
  - `kubectl delete node <node>`
  - on a master: `calicoctl delete node <node>`
  - (Optional) `tools/aws.py detach -g <group> <node>`
  - (Optional) `tools/aws.py terminate <node>`
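A sketch of the whole removal sequence; the drain flags are common choices rather than something this repo mandates, and `<node>`/`<group>` are placeholders:

```sh
# Evict pods; ignore DaemonSet-managed pods and discard emptyDir data.
kubectl drain <node> --ignore-daemonsets --delete-local-data
# Remove the node object from the apiserver.
kubectl delete node <node>
# On a master, remove the node from Calico as well.
calicoctl delete node <node>
# Optionally detach the instance from its group and terminate it.
tools/aws.py detach -g <group> <node>
tools/aws.py terminate <node>
```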
- Expand the disk if necessary, e.g. `xfs_growfs /var/lib/docker` (see the sketch below)
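A sketch, assuming `/var/lib/docker` is an XFS filesystem on its own EBS volume (`/dev/xvdf` is an illustrative device name):

```sh
# After enlarging the EBS volume in AWS, grow the partition (if any), then the filesystem.
sudo growpart /dev/xvdf 1          # from cloud-utils-growpart; skip if there is no partition table
sudo xfs_growfs /var/lib/docker    # XFS grows online, addressed via the mount point
```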
- Label nodes tagged with `k8s-node-role` with the role and other labels: `ansible-playbook -i inventory/inv-ec2.py -u <username> playbooks/label.yml`
  - `kubectl get nodes --show-labels` should now show the additional labels
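Once labeled, the role can be used as a node selector; for example (`<role>` is a placeholder):

```sh
kubectl get nodes -l role=<role>
```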
- To mark nodes as deployed: `tools/aws.py tag <list of nodes separated by blanks or commas>`
- The ELB may not be ready immediately after the apiserver comes up, so bootstrapping may fail partway through. Waiting until the ELB is ready and then retrying solves the problem; one way to wait is sketched below.
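A sketch that polls the apiserver's `/healthz` endpoint through the load balancer until it answers (the hostname and port are illustrative placeholders):

```sh
# curl -f makes HTTP errors fail the loop condition; -k skips TLS verification.
until curl -ksf https://apiserver-<env>.example.com:6443/healthz >/dev/null; do
  echo "waiting for the apiserver ELB..."; sleep 10
done
```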