Launching Kubernetes with Ansible roles

Meherchaitanya
5 min read · Apr 10, 2021

Launching a Kubernetes cluster by hand is a tedious task, but it can be automated with a configuration-management tool like Ansible.

So I created an Ansible collection that consists of roles for provisioning the instances and then configuring the cluster. Below is how I wrote the roles.

Provisioning role

- name: install boto
  pip:
    name: boto

- name: create kube master
  ec2:
    key_name: "{{ key_name }}"
    group: "{{ sec_grp }}"
    instance_type: "{{ os_type }}"
    image: "{{ ami_id }}"
    wait: true
    region: ap-south-1
    exact_count: 1
    count_tag:
      Name: kube_master
    instance_tags:
      Name: kube_master
      App: kube
  register: kube_ec2_master

- name: create kube slaves
  ec2:
    key_name: "{{ key_name }}"
    group: "{{ sec_grp }}"
    instance_type: "{{ os_type }}"
    image: "{{ ami_id }}"
    wait: true
    region: ap-south-1
    exact_count: 2
    count_tag:
      Name: kube_slave
    instance_tags:
      Name: kube_slave
      App: kube
  register: kube_ec2_slave
  1. The first task installs the boto library on the machine we use to provision the instances (I ran this against localhost).
  2. Then we create the Kubernetes master with the required tags and a count of 1.
  3. Next, we create the Kubernetes slaves with a count of 2; the remaining values are passed in as role vars (see the vars file below). A quick sanity check with the AWS CLI follows this list.
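
As a quick sanity check (not part of the collection), you can list the instances the role created by filtering on the App: kube tag with the AWS CLI:

aws ec2 describe-instances \
  --region ap-south-1 \
  --filters "Name=tag:App,Values=kube" \
  --query "Reservations[].Instances[].[InstanceId,State.Name,PublicDnsName]" \
  --output table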

Vars file

---
# vars file for provision
key_name: mypem11
sec_grp: ssh-only
ami_id: ami-0a9d27a9f4f5c0efc
os_type: t2.micro

Here I have used Amazon Linux; using RHEL 8 would be a problem because Docker is not shipped in the default RHEL 8 repositories (Red Hat provides Podman instead).
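
If you do want to use RHEL 8, one workaround is to pull in Docker's upstream CE repository before installing the engine. This is only a sketch, not part of my roles, and it assumes Docker's CentOS repo file also works on RHEL 8:

- name: add the upstream Docker CE repo (RHEL 8 workaround, not in the collection)
  get_url:
    url: https://download.docker.com/linux/centos/docker-ce.repo
    dest: /etc/yum.repos.d/docker-ce.repo

- name: install docker-ce from the upstream repo
  package:
    name: docker-ce
    state: present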

With provisioning complete, we move on to configuring all the Kubernetes nodes. For this, the collection has a role named common.

Common configuration

- name: install container engine [docker] and iproute-tc
  package:
    name:
      - docker
      - iproute-tc
    state: present

- name: start the container engine
  service:
    name: docker
    state: started
    enabled: yes

- name: Copy kubernetes repo file
  copy:
    src: kubernetes.repo
    dest: /etc/yum.repos.d/

- name: Install kubectl, kubelet, kubeadm programs
  yum:
    name:
      - kubectl
      - kubelet
      - kubeadm
    state: present
    disable_excludes: kubernetes

- name: change the cgroup driver to systemd
  copy:
    src: daemon.json
    dest: /etc/docker/
  notify: restart docker

- name: enable bridging in iptables
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: '1'

- name: enable bridging in ip6tables
  sysctl:
    name: net.bridge.bridge-nf-call-ip6tables
    value: '1'
  • First, the docker and iproute-tc packages are installed (iproute-tc provides the tc utility that kubeadm's preflight checks look for), and then the docker service is started and enabled.
  • Next, the Kubernetes repo file is copied from the controller node to the managed nodes:
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
  • Now we install the Kubernetes packages from the repo configured in the previous step, disabling the exclude list defined for that repo.
  • Then we change Docker's cgroup driver to systemd by copying a daemon.json (a minimal example is shown after this list) and restart docker through a handler notification whenever that file changes:
- name: restart docker
  service:
    name: docker
    state: restarted
  • Finally, we enable bridged traffic to pass through iptables for both IPv4 and IPv6.
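
The daemon.json referenced in the role is not shown above; a minimal version, assuming the only change needed is switching Docker's cgroup driver to systemd (as the Kubernetes docs recommend), would be:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}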

Now we need to configure the master node.

Master Node config

- name: init kubectl
  command: "kubeadm init --pod-network-cidr={{ net_name }} --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem"
  ignore_errors: yes

- name: create .kube dir
  file:
    path: $HOME/.kube
    state: directory
    mode: 0755

- name: copy config
  command: cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

- name: change ownership
  shell: chown $(id -u):$(id -g) $HOME/.kube/config

- name: create token for connecting workernodes
  command: 'kubeadm token create --print-join-command'
  register: jointoken

- name: create flannel overlay network
  shell: "curl https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml | sed 's@10.244.0.0/16@{{net_name}}@' > kube-flannel.yml; kubectl apply -f kube-flannel.yml"
  • The first step is to initialize the Kubernetes cluster with kubeadm init; the pod network CIDR is provided by the user.
  • After the cluster is created, we copy the admin config file into $HOME/.kube so that kubectl can reach the Kubernetes API endpoint.
  • After this, we create a token and print the full join command that the worker nodes will use to join the cluster.
  • Then we create the overlay network with Flannel; since the user may specify a different pod network, we swap Flannel's default CIDR for the user's value with sed before applying the manifest.

On the slave nodes, all we need to do is run the join command captured on the master node so that each slave can connect to the cluster.

Slave Node configuration

- name: run the join command
  shell: "{{join_command}}"
  ignore_errors: yes

Now we need to create the playbooks that tie all of this together.

First, we configure the dynamic inventory, which I have already discussed in my previous articles:

plugin: amazon.aws.aws_ec2
regions:
  - ap-south-1
keyed_groups:
  - key: tags
    prefix: tag
hostnames:
  - ip-address
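
To confirm that the plugin sees the tagged instances and builds the tag_* groups used later, you can graph the inventory (the path matches the ansible.cfg below):

ansible-inventory -i $HOME/inventory/aws_ec2.yml --graph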

Once the dynamic inventory is in place, we configure the ansible.cfg file as below:

[defaults]
inventory=$HOME/inventory/aws_ec2.yml
host_key_checking=False

provision.yml for provisioning the instances

- hosts: localhost
  collections:
    - smc181002.k8s
  tasks:
    - name: launch nodes
      include_role:
        name: provision
      vars:
        key_name: mypem11
        sec_grp: allow-all
        ami_id: ami-0bcf5425cdc1d8a85
        os_type: t2.micro
    - set_fact:
        instances: "{{ kube_ec2_master.instances + kube_ec2_slave.instances }}"
    - name: wait for ssh to start
      wait_for:
        host: "{{item.public_dns_name}}"
        port: 22
        state: started
      loop: "{{instances}}"

Now that the instances are provisioned, we create the playbook that configures the nodes with the collection roles.

main.yml for configuration

- hosts: tag_App_kube
  remote_user: ec2-user
  become: yes
  become_user: root
  collections:
    - smc181002.k8s
  roles:
    - common

- hosts: tag_Name_kube_master
  remote_user: ec2-user
  become: yes
  become_user: root
  collections:
    - smc181002.k8s
  tasks:
    - name: run master role
      include_role:
        name: master
      vars:
        net_name: "10.240.0.0/16"
    - set_fact:
        join_command: "{{ jointoken }}"
    - debug:
        msg: "{{hostvars[groups['tag_Name_kube_master'][0]]['join_command']['stdout']}}"

- hosts: tag_Name_kube_slave
  remote_user: ec2-user
  become: yes
  become_user: root
  collections:
    - smc181002.k8s
  tasks:
    - name: slave config
      include_role:
        name: slave
      vars:
        join_command: "{{hostvars[groups['tag_Name_kube_master'][0]]['join_command']['stdout']}}"
  • tag_App_kube is the group that contains every node of the cluster, so the shared setup is applied there with the common role from smc181002.k8s.
  • Next comes the master node configuration using the master role from smc181002.k8s; the registered join-command output is then saved into the host's vars with the set_fact module.
  • Then we configure the slaves by passing in the join command, read from the master node's host vars that were set in the previous play.
  • Before running all of this, we add the SSH key for the instances to the agent so that Ansible can reach them on AWS:
eval `ssh-agent`;
ssh-add /path/to/pem_file
  • After running the playbooks, we can log in to the master node and use kubectl as the root user, just as we would with minikube; the commands are summarized below.
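
For reference, the whole run from the controller node is just the two playbooks described above, followed by a quick check on the master:

ansible-playbook provision.yml
ansible-playbook main.yml

# then, on the master node as root
kubectl get nodes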

Hope you enjoyed the article and found it useful.

Ansible Galaxy — ansible collection link

https://github.com/smc181002/k8s/ — github code

smc181002/create_kube_cluster (github.com) — includes the playbook files too
