# Terraform + libvirt + Ansible = HA Kubernetes Lab


## Introduction

A home lab is a great way to explore and learn about different tools, architectures, and development methods.

The idea behind this repository is to use Terraform and Ansible to build a local Kubernetes cluster that is more extensible and closer to a production architecture than the typical single-machine example environments.

## About

This repository contains the Terraform modules and Ansible roles needed to build a local high-availability (HA) Kubernetes cluster to experiment with.

The Terraform modules use the libvirt Terraform provider to provision a virtual network and virtual machines, so you'll need `libvirtd` running on a Linux host to use this repository.

The stacked Kubernetes control plane is managed using HAProxy and Keepalived running as static pods on the control plane VMs.
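
The load balancer runs entirely on the control-plane VMs themselves, so once a node is up you can see its manifests next to the other kubeadm-managed static pods. A minimal sketch, assuming the kubeadm default manifest path; the exact file names are assumptions:

```sh
# On a control-plane VM: static pod manifests live in the kubelet's
# manifest directory (kubeadm default path).
ls /etc/kubernetes/manifests/
# Expected to include haproxy and keepalived manifests alongside
# etcd, kube-apiserver, kube-controller-manager and kube-scheduler.
```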

## Requirements

To use this repository you will need the following on your local machine (a quick set of verification commands follows the list):

- Linux
- Ansible
- Terraform >= v0.13
- libvirt with a default storage pool - the network module in this repository will define a network for you
- terraform-provider-libvirt
- Enough CPU, RAM, and disk space to run the libvirt guests (five by default) - the more the better!
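
A hedged set of quick checks for these prerequisites (the storage pool name `default` is the libvirt default; adjust if yours differs):

```sh
# Verify the local toolchain and libvirt setup before running Terraform.
terraform version        # should report v0.13 or newer
ansible --version
virsh pool-list --all    # a storage pool named "default" should be active
systemctl is-active libvirtd
```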

## Using this repository

Before running Terraform, add your public key (excluding the `ssh-rsa` prefix) to the corresponding section in each of the `variables.tf` files:

variable "ssh-public-key" {
  description = "ssh-rsa key for terraform-libvirt user"
  default     = "<KEY GOES HERE>"
}

Running `terraform apply` with no variable arguments will create 5 Kubernetes nodes: 3 for the control plane and 2 for workloads. Each is allocated 2 CPUs and 2 GB of RAM.
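
In practice that boils down to the usual Terraform workflow, run from the directory containing the root module (the exact path depends on how you lay out your checkout):

```sh
terraform init     # installs terraform-provider-libvirt
terraform plan     # review the network and the 5 VMs to be created
terraform apply    # provision 3 control-plane and 2 worker VMs
```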

Before running Ansible, add the following to your `~/.ssh/config` so that host key fingerprint checks don't interrupt your Ansible runs:

```
Host 10.17.3.*
  StrictHostKeyChecking no
```

Once the VMs are up, running `ansible-playbook -i hosts bootstrap.yaml` from the `ansible` directory will bootstrap the Kubernetes control plane on one VM. The role also generates a join token for the other control-plane nodes and uses it on the remaining nodes to join them to the cluster.
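
For reference, the full sequence, plus a hedged way to confirm the join from the first control-plane VM (the login user and on-VM kubeconfig path are assumptions based on kubeadm defaults):

```sh
cd ansible
ansible-playbook -i hosts bootstrap.yaml

# Optional check from the first control-plane VM (login user is an assumption):
# ssh <user>@10.17.3.2 'sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes'
```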

The Kubernetes API is accessed through a virtual IP, 10.17.3.254, managed by HAProxy and Keepalived.

Running `ansible-playbook -i hosts local-config.yaml` will copy `admin.conf` to the playbook directory so it can be used with kubectl, e.g. `kubectl --kubeconfig admin.conf get namespace`.
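
Putting the two commands together:

```sh
# Copy admin.conf from the cluster to the playbook directory...
ansible-playbook -i hosts local-config.yaml
# ...then point kubectl at it; the API is reached via the 10.17.3.254 VIP.
kubectl --kubeconfig admin.conf get namespace
```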

Each of the VMs has a static IP address for ease of access and to keep track of what lives where. In the default configuration the machines are:

| Hostname         | IP address |
|------------------|------------|
| k8s-controller-2 | 10.17.3.2  |
| k8s-controller-3 | 10.17.3.3  |
| k8s-controller-4 | 10.17.3.4  |
| k8s-nodes-2      | 10.17.3.10 |
| k8s-nodes-3      | 10.17.3.11 |

The guest hostnames are indexed roughly according to their IP addresses: since 10.17.3.1 is the gateway, the node names and IPs start at 2.
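
Because the addresses are fixed, you can SSH straight to any guest. The login user below is an assumption based on the `terraform-libvirt` user mentioned in `variables.tf`:

```sh
# Login user is an assumption; substitute whatever your cloud-init config creates.
ssh terraform-libvirt@10.17.3.2
```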

Todo: fix the k8s-nodes indexing to use an offset of 10.

## Helpful resources, kudos, and credits

- How To Provision VMs on KVM with Terraform - a great resource to consult if you're just getting started with Terraform and KVM.
- Using the Libvirt Provisioner With Terraform for KVM - a more advanced example than the first.
- Dynamic Cloud-Init Content with Terraform File Templates - templating cloud-init data wouldn't have been possible without this invaluable explanation.
- The terraform-provider-libvirt documentation, of course!
- How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 18.04 - this tutorial formed the basis for the Ansible roles.