Overview
This post covers 3 major topics:
- Kubernetes Lab Environment: The steps I took to set up a Kubernetes cluster from scratch, in a self-hosted virtualised environment using Oracle VirtualBox and HashiCorp Vagrant
- Kubernetes Notes, Commands and Manifests: A collection of notes, commands and manifests which I noted down as part of my preparation for the Certified Kubernetes Administrator exam
- Information Sources: A list of links to the main sources of information I used when creating this environment and learning Kubernetes
Kubernetes Lab Environment
The VMs which formed the Kubernetes cluster were deployed on a Dell Precision workstation with plenty of CPU and RAM resources:
edrandall@precision:~$ cat /proc/cpuinfo | grep processor | wc -l
24
edrandall@precision:~$ free -g
               total        used        free      shared  buff/cache   available
Mem:              62           1           1           0          59          59
Swap:              1           0           1
Infrastructure Layout
The following diagram shows how the environment was laid out on the Dell workstation. All systems (apart from my MacBook) ran Ubuntu Linux:
In order to allow network access to applications running on pods, the following configuration was made on my MacBook, the Dell workstation and each VM created by Vagrant:
Route on the MacBook:
edrandall@Eds-MacBook-Pro ~ % sudo netstat -nr | grep 123
10.123/16 10.10.11.11 UGSc en0
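For reference, a route like this can be created on macOS with the route command. This is a sketch which assumes the workstation's LAN address is 10.10.11.11, as shown above; the route does not persist across reboots:
sudo route -n add -net 10.123.0.0/16 10.10.11.11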
IP forwarding enabled on the Precision workstation:
edrandall@precision:~$ cat /proc/sys/net/ipv4/ip_forward
1
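If forwarding is not already enabled, it can be switched on at runtime and made persistent across reboots; a minimal sketch (the sysctl.d file name is my own choice):
sudo sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-ip-forward.conf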
Route created on all VMs:
function tln_route {
  echo "Creating a route back to 10.10.11.0/24"
  ip route add 10.10.11.0/24 via $HOST_NET.1 onlink dev enp0s8
}
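The $HOST_NET variable is assumed to be set earlier in the provisioning scripts to the first three octets of the VM's host-only network; a hypothetical invocation:
HOST_NET="10.123.1"   # illustrative value, normally set by the provisioning script
tln_route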
Detailed Configuration Information
The Vagrantfile and custom deployment scripts can be found in the test-environment folder of my Kubernetes GitHub repository.
Kubernetes Notes, Commands and Manifests
Setting up (tab) autocomplete for a bash shell
source <(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
Create an alias ‘k’ and enable autocompletion to work with it
alias k=kubectl
complete -o default -F __start_kubectl k
Deploy a messaging pod using the redis:alpine image with the labels set to tier=msg
k run messaging --image=redis:alpine --labels="tier=msg"
Create a namespace called production
k create namespace production
Get the list of nodes in JSON format and store it in a file at /opt/outputs/nodes.json
k get nodes -o json > /opt/outputs/nodes.json
Create a service messaging-service to expose the messaging application within the cluster on port 6379
k expose pod messaging --port 6379 --name=messaging-service
Create a static pod named static-busybox on the controlplane node that uses the busybox image and the command sleep 1000
k run static-busybox --image=busybox --dry-run=client -o yaml --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yaml
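Since this is a static pod, the kubelet creates it directly from the manifest folder; a quick check (static pod names are suffixed with the node name, assumed here to be controlplane):
k get pod static-busybox-controlplane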
Forcibly recreate a pod after doing a ‘k edit podname’
k replace --force -f /tmp/output-file-name.yaml
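For context, k edit rejects changes to immutable pod fields and saves the edited spec to a temporary file whose path it prints; a sketch of the flow (the temp file name below is illustrative):
k edit pod messaging                               # edit rejected; spec saved to a temp file
k replace --force -f /tmp/kubectl-edit-xxxx.yaml   # delete and recreate the pod from it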
Create a deployment named hr-web-app using the image nginx:alpine with 2 replicas
k create deploy hr-web-app --image=nginx:alpine --replicas=2
Expose the deployment called hr-web-app on a NodePort of 30082
- Generate a yaml file
k expose deployment hr-web-app --name=hr-web-app-service --type=NodePort --dry-run=client -o yaml --port 8080 > hr-service.yaml
- Edit the yaml file to add the NodePort value (nodePort: 30082)
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: hr-web-app
  name: hr-web-app-service
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    nodePort: 30082
  selector:
    app: hr-web-app
  type: NodePort
status:
  loadBalancer: {}
- Apply the manifest file
k apply -f hr-service.yaml
Use a JSONPath query to retrieve the OS image names of all the nodes
k get nodes -o jsonpath='{.items[*].status.nodeInfo.osImage}'
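A useful variant pairs each node name with its OS image, one node per line; a sketch using kubectl's jsonpath range syntax:
k get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.osImage}{"\n"}{end}'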
Create a persistent volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-analytics
spec:
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /pv/data-analytics
Back up etcd
- Ensure the correct API version will be used.
export ETCDCTL_API=3
- Get the locations of the cert, ca and key files
grep file /etc/kubernetes/manifests/etcd.yaml
- Perform the backup
etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --endpoints=localhost:2379 \
  snapshot save backup-file
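The snapshot can be verified straight away with etcdctl's status subcommand:
ETCDCTL_API=3 etcdctl snapshot status backup-file -w table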
Create a Pod called redis-storage using the image redis:alpine, with a Volume of type emptyDir that lasts for the life of the Pod.
- Pod ‘redis-storage’ uses volumeMount with mountPath = /data/redis
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: redis-storage
  name: redis-storage
spec:
  containers:
  - image: redis:alpine
    name: redis-storage
    volumeMounts:
    - mountPath: /data/redis
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      sizeLimit: 500Mi
Create a new pod called super-user-pod with image busybox:1.28.
- Allow the pod to set system_time (the SYS_TIME capability)
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: super-user-pod
  name: super-user-pod
spec:
  containers:
  - command:
    - sleep
    - "4800"
    image: busybox:1.28
    name: super-user-pod
    securityContext:
      capabilities:
        add: ["SYS_TIME"]
Create a PersistentVolume and PersistentVolumeClaim
Create the PVC:
This assumes the PersistentVolume has already been created. The access mode and requested size in the PVC should match those of the volume (check with k get pv).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
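Since the PVC above assumes an existing volume, here is a minimal hostPath PersistentVolume sketch that would satisfy it; the name and path are illustrative choices of mine, while the access mode and size match the claim:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /pv/my-data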
Create the pod:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: use-pv
  name: use-pv
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: my-pvc
  containers:
  - image: nginx
    name: use-pv
    volumeMounts:
    - mountPath: "/data"
      name: task-pv-storage
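Once applied, the claim should bind and the pod should mount it; a quick verification:
k get pvc my-pvc          # STATUS should show Bound
k describe pod use-pv     # check the Volumes and Mounts sections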
Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica
k create deployment nginx-deploy --image=nginx:1.16 --replicas=1
Upgrade the deployment to use image nginx:1.17
kubectl set image deployment/nginx-deploy nginx=nginx:1.17
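The rollout can then be watched and, if necessary, reverted using the standard rollout subcommands:
k rollout status deployment/nginx-deploy
k rollout history deployment/nginx-deploy
k rollout undo deployment/nginx-deploy    # roll back to nginx:1.16 if required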
Create a new user and apply permissions
- Create a new user
- Create csr.yaml file
- Include the given CSR (base64)
- Change csr name (metadata)
- Create the CSR in Kubernetes
k create -f csr.yaml
- Check the CSR is there
k get csr
- Approve the CSR
k certificate approve csr-name
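For reference, a minimal csr.yaml sketch; the name john and the elided base64 request are placeholders, and the signerName shown is the standard one for client certificates:
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john
spec:
  request: <base64-encoded CSR goes here>
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth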
- Create a new role
- Create a new role in the development namespace
k create role developer --verb=create,get,list,update,delete --resource=pods -n development
- Get help with creating a role
k create role -h
- Create a role binding
- Check if the user can do something
k auth can-i get pods --namespace=development --as john
- Create the rolebinding
k create rolebinding john-developer --role=developer --user=john --namespace=development
- Check again if the user can do something
k auth can-i get pods --namespace=development --as john
Taking a snapshot of etcd (running as a static pod)
Find the locations of the needed cert, cacert and key files:
grep file /etc/kubernetes/manifests/etcd.yaml
Perform the backup:
ETCDCTL_API=3 etcdctl snapshot save \
  --key="/path/to/key" \
  --cert="/path/to/cert" \
  --cacert="/path/to/cacert" \
  /backup/filename
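Restoring from the snapshot is the mirror operation; a sketch assuming a kubeadm cluster, where the etcd static pod manifest's hostPath must then be pointed at the new data directory:
ETCDCTL_API=3 etcdctl snapshot restore /backup/filename \
  --data-dir=/var/lib/etcd-from-backup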
Information Sources
To create this environment, I used information from a variety of sources, including:
- The Official Kubernetes Documentation
- KodeKloud's CKA Certification Course
- The Official HashiCorp Vagrant Documentation
- The Linux Foundation's Kubernetes Fundamentals Course
- Kelsey Hightower's 'Kubernetes the Hard Way'
- The Certified Kubernetes Administrator exam page
- Oracle VirtualBox
- HashiCorp Vagrant