This will cover the quickest route to seeing nginx working in a development cluster:
- Installing minikube.
- Starting nginx.
- Opening up ports to the outside world.
- Creating new ingress rules.
- Seeing ingress changes propagate to the nginx config itself.
- Configuring udp/tcp streams.
- Curling your cluster.
Some background terminology first. One of the objects that you can create within the kubernetes system is a deployment. This allows you to specify how many containers (pods) of a given docker image you want to be fired up. So the deployment basically wraps the concept of a container (and a pod) and adds a replica requirement so that kubernetes can continually try to match your expectations. You might want 3 replicas of a given docker image or maybe you only want one.
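As a concrete sketch, a deployment asking for three replicas might look like the following. The name, labels, image, and port here are placeholder values for illustration, not something this guide's cluster already contains:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-b
spec:
  # Kubernetes will continually try to keep three pods of this image running.
  replicas: 3
  selector:
    matchLabels:
      app: backend-b
  template:
    metadata:
      labels:
        app: backend-b
    spec:
      containers:
      - name: backend-b
        image: example/backend-b:latest  # placeholder image
        ports:
        - containerPort: 8080
```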
Then you have services. Services let the kubernetes ecosystem know which ports a given application can be accessed on. Generally these will be defined with the type "ClusterIP", which makes them only accessible from within the cluster. In this way the system can always reach an internal service while the deployment adds or removes containers (pods) on the fly based on the deployment definition.
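A minimal ClusterIP service could look like this. The name, selector, and port numbers are placeholders; the selector must match the labels on the pods the service should route to:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-b
spec:
  type: ClusterIP  # only reachable from inside the cluster
  selector:
    app: backend-b
  ports:
  - port: 10002        # port the service is accessed on
    targetPort: 8080   # port the container actually listens on
```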
On debian, download the minikube_0.30-0.deb file from the link below and install it using sudo dpkg -i minikube_0.30-0.deb.
The download is available from the minikube releases page, along with instructions for other operating systems.
Prepare Virtual Machine Libraries
You need to make sure your operating system can create virtual machines.
$ sudo apt install libvirt-clients libvirt-daemon-system qemu-kvm
$ sudo usermod -a -G libvirt $(whoami)
$ newgrp libvirt
The kvm2 driver is a good bet when it comes to linux.
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
$ sudo install docker-machine-driver-kvm2 /usr/local/bin/
Note that depending on your hardware you might have to enable virtualization in your BIOS.
Check your minikube version.
$ minikube version
minikube version: v0.30.0
Start your local cluster and then enable ingress. Kubernetes will create an ingress specific deployment.
Ingress is the label given to the type of object within kubernetes that handles how external requests are mapped to internal ip addresses. Once upon a time you might have spent your time manually editing nginx config files. That is no longer the case, and letting go of that habit was my biggest struggle in getting to understand ingress.
$ minikube start --vm-driver kvm2
$ minikube addons enable ingress
Ok so how about we quickly log into nginx and see what is going on. Get a listing of the pods and then choose the nginx-ingress-controller pod. In my case it is the value below but it will differ on your system.
$ kubectl get pods -n kube-system
# Find your nginx pod in the resulting list.
$ kubectl exec -it -n kube-system nginx-ingress-controller-8566746984-vdjq8 -- /bin/bash
# Now you are viewing the bash shell of the nginx pod, so take a look at nginx.conf.
www-data@nginx-ingress-controller-8566746984-vdjq8:/etc/nginx$ cat nginx.conf
That will show you the current state of your nginx.conf file. Simply type 'exit' to exit the pod. An equally useful thing to know is that you can tail the logs of the nginx pod as below. When you finally make ingress changes, you will see the nginx process updating its own config file.
$ kubectl logs -n kube-system nginx-ingress-controller-8566746984-vdjq8 -f
Opening Up Ports
First of all, if you want a new port exposed you need to make sure that the ingress controller is made aware of this. As far as I can tell the controller may well be akin to a load balancer. While it is responsible for starting up the nginx container, editing the ports section of its configuration doesn't actually change the nginx config. It will however start up an entirely new container (pod). You will see that happen if you are tailing the nginx log while editing the controller config.
Look at the new deployment.
$ kubectl get deployments -n kube-system nginx-ingress-controller -o yaml
You will see that ports 80, 443, and 18080 are made available. Port 18080 is apparently for health checks and nginx stats.
Add a new port.
$ kubectl edit deployments -n kube-system nginx-ingress-controller
# Add appropriate values under "ports:", e.g.:
ports:
- containerPort: 8080
  hostPort: 8080
  protocol: TCP
Create New Ingress Rules
Now you may or may not have opened up a new port, but what we finally want to see is the nginx.conf file receiving updates based on instructions that you give kubernetes. This happens by creating a yaml file of kind: Ingress.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: domain.com
    http:
      paths:
      - path: /health
        backend:
          serviceName: backend-a
          servicePort: 10001
      - path: /
        backend:
          serviceName: backend-b
          servicePort: 10002
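Nothing happens until kubernetes is told about this manifest. Assuming you saved the yaml above as example-ingress.yaml (the filename is arbitrary), apply it:

```shell
$ kubectl apply -f example-ingress.yaml
```

If you are tailing the controller logs at this point, you should see nginx reload its config shortly afterwards.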
If you log back into your nginx, as above using the /bin/bash command, you will see that the nginx.conf file has been updated.
Adding TCP/UDP Streams To Nginx
Create a new file, tcp-services.yaml, with the following contents:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
data:
  9000: "default/example-go:8080"

Note that the configmap lives in the same namespace as the ingress controller, which on minikube is kube-system.
Let kubernetes know about this change:
$ kubectl apply -f tcp-services.yaml
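The data entry 9000: "default/example-go:8080" tells nginx to forward its port 9000 to port 8080 of a service called example-go in the default namespace. For that mapping to resolve, a matching service needs to exist. A minimal sketch, where the service name and port come from the configmap entry but the selector is an assumption:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-go
  namespace: default
spec:
  selector:
    app: example-go  # assumed label on the backing pods
  ports:
  - port: 8080
```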
The nginx controller is already aware that it should be watching for this change. How? If you look back at the deployment for the controller, you will see that an argument flag is being passed: --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
$ kubectl get deployments -n kube-system nginx-ingress-controller -o yaml
# Look for the following:
containers:
- args:
  - /nginx-ingress-controller
  - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
  - --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
  - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
  - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
  - --annotations-prefix=nginx.ingress.kubernetes.io
  - --report-node-internal-ip-address
This will watch for changes to a configmap called tcp-services, which is what you just passed to kubernetes, and update nginx.conf accordingly.
Curling Your Cluster
Finally, curl the minikube cluster on the new ports. As long as you have set up your own services and deployments behind port 80 (or whichever port you opened), you will get a response back. You don't have to specify the host, but if you added "domain.com" to the ingress rules, setting the Host header makes sure that your curl request hits the rule set for "domain.com".
$ curl http://`minikube ip`/ -H "Host: domain.com" -k
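If you also configured the tcp-services configmap, the stream port can be exercised the same way. This assumes a service is actually listening behind the mapping on port 9000:

```shell
$ curl http://`minikube ip`:9000/
```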