Services
Basic Kubernetes Routing Models
By the end of this exercise, you should be able to:
- Route traffic to a kube deployment using the correct tool in each of the following scenarios:
  - Cluster-internal traffic to a stateless deployment
  - Cluster-internal traffic to a stateful deployment
  - Ingress traffic to a stateless deployment
- Establish a network policy to block traffic to or from a pod
- Route requests between pods across Kubernetes namespaces
Service Types
Kubernetes supports different types of services for different use cases:
- A ClusterIP service exposes the service on a virtual IP address reachable from any of the Kubernetes cluster's nodes. The use of virtual IP addresses for this purpose makes it possible for several pods to expose the same port on the same node; all of these pods will be accessible via a unique IP address.
- For a NodePort service, Kubernetes allocates a port from a configured range (the default is 30000-32767), and each node forwards that port, which is the same on each node, to the service. It is possible to define a specific port number, but you should take care to avoid potential port conflicts.
- For a LoadBalancer service, Kubernetes provisions an external load balancer. The kind of load balancer depends on how the specific Kubernetes cluster is configured.
- An ExternalIP service uses an IP address from a predefined pool of external IP addresses routed to the cluster's nodes. These external IP addresses are not managed by Kubernetes; they are the responsibility of the cluster administrator.
- It is also possible to create and use services without selectors. A user can manually map the service to specific endpoints, and the service works as if it had a selector; traffic is routed to the endpoints defined by the user.
- ExternalName is a special case of a service that does not have selectors. It does not define any ports or endpoints. Instead, it returns the name of an external service that resides outside the cluster. Such a service works in the same way as the others, with the only difference being that the redirection happens at the DNS level, without proxying or forwarding. Later, you may decide to bring such an external service inside the cluster; in that case, you will need to update the service definition in your application.
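As a sketch of the selector-less case above (the name, IP, and port are illustrative, not part of this lab), a Service without a selector is paired with a manually managed Endpoints object of the same name:

apiVersion: v1
kind: Service
metadata:
  name: external-db          # illustrative name
spec:
  ports:
  - port: 3306
    targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db          # must match the Service name
subsets:
- addresses:
  - ip: 192.0.2.42           # example address managed by the administrator, not Kubernetes
  ports:
  - port: 3306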
Service Discovery
Kubernetes supports two modes of finding a service: through environment variables and DNS.
Environment Variables
Kubernetes injects a set of environment variables into pods for each active service. Ordering matters: the service must be created before, for example, the replication controller or replica set creates a pod's replicas. These environment variables contain the service host and port, for example:
MYSQL_SERVICE_HOST=10.0.150.150
MYSQL_SERVICE_PORT=3306
An application in the pod can use these variables to establish a connection to the service.
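As a minimal illustration (assuming a mysql service existed before this pod was created), you can inspect and use the injected variables from inside the pod:

# List the variables Kubernetes injected for the mysql service
printenv | grep MYSQL_SERVICE

# Use them to reach the service, e.g. with a simple TCP check
nc -zv "$MYSQL_SERVICE_HOST" "$MYSQL_SERVICE_PORT"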
DNS
Kubernetes automatically assigns DNS names to services. A special DNS record can be used to specify port numbers as well. To use DNS for service discovery, a Kubernetes cluster should be properly configured to support it.
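For example, assuming a mysql service in the default namespace whose port is named mysql, the A record and the SRV record (which also carries the port number) can be resolved from any pod, and the fully qualified form works across namespaces:

# A record of the service; the FQDN form also works from other namespaces
nslookup mysql.default.svc.cluster.local

# SRV record for a named port returns the port number as well
dig SRV _mysql._tcp.mysql.default.svc.cluster.local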
Routing Cluster-Internal Traffic
By cluster-internal traffic, we mean traffic originating from a pod running on your cluster and sent to another pod running on the same cluster. We need to consider two cases:
- Stateless deployments: requests can be load balanced freely across replicas. A containerized API is a common example.
- Stateful deployments: we may need to make explicit and consistent decisions about which container we send a request to. A containerized database using simple local storage is a common example (see the headless-service sketch below).
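For the stateful case, a common tool is a headless service. The sketch below (with illustrative names, not part of this lab) sets clusterIP: None so that DNS returns the individual pod addresses instead of a single virtual IP; a pod managed by a StatefulSet can then be reached consistently at a stable name such as db-0.db-headless.default.svc.cluster.local.

apiVersion: v1
kind: Service
metadata:
  name: db-headless          # illustrative name
spec:
  clusterIP: None            # headless: no virtual IP, DNS returns the pod IPs
  selector:
    app: db                  # illustrative label of the stateful pods
  ports:
  - port: 5432
    targetPort: 5432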
Services (also called micro-services) are objects which declare a policy for accessing a logical set of Pods. They typically select pods by label, allowing persistent access to a resource even when front-end or back-end containers are terminated and replaced.
Native applications can use the Endpoints API for access; non-native applications can use a virtual-IP-based bridge to reach back-end pods. The ServiceType can be one of:
- ClusterIP: the default; exposes the service on a cluster-internal IP, only reachable within the cluster.
- NodePort: exposes the service on each node's IP at a static port. A ClusterIP is also automatically created.
- LoadBalancer: exposes the service externally using the cloud provider's load balancer. NodePort and ClusterIP are automatically created.
- ExternalName: maps the service to the contents of externalName using a CNAME record.
We use services as part of decoupling, so that any agent or object can be replaced without interrupting access from the client to the back-end application.
Routing to Stateless Deployments
Routing to stateless deployments internal to a cluster relies on the use of a ClusterIP service, which provides a stable networking endpoint for a collection of containers.
Create a deployment file as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: destination
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: destination
  template:
    metadata:
      labels:
        app: destination
    spec:
      containers:
      - name: destination-container
        image: r.deso.tech/whoami/whoami
Save this file as deploy-destination.yaml.
- Declare a ClusterIP service to route traffic to your destination deployment internally. Create a service file, svc-destination.yaml:
apiVersion: v1
kind: Service
metadata:
  name: svc-destination
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: destination
  ports:
  - port: 8080
    targetPort: 80
This service will route traffic sent to port 8080 on the IP generated by this service to port 80 in pods which match the app: destination label selector.
- Declare another deployment we'll use to send traffic to the first. Copy deploy-destination.yaml to deploy-origin.yaml and edit it as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: origin
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: origin
  template:
    metadata:
      labels:
        app: origin
    spec:
      containers:
      - name: origin-container
        image: r.deso.tech/library/netshoot
        command: ["sleep"]
        args: ["1000000"]
- Now apply deploy-destination.yaml
kubectl apply -f deploy-destination.yaml
deployment.apps/destination created
Now apply deploy-origin.yaml.
kubectl apply -f deploy-origin.yaml
deployment.apps/origin created
Now apply svc-destination.yaml.
kubectl apply -f svc-destination.yaml
service/svc-destination created
- Run kubectl exec to open a shell in the container of the origin deployment:
kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
destination-6f66576498-h4n7x   1/1     Running   0          2m32s
destination-6f66576498-j84m4   1/1     Running   0          2m32s
destination-6f66576498-xc9kh   1/1     Running   0          2m32s
origin-5b945ff9c5-ncx9r        1/1     Running   0          2m10s
kubectl exec -it origin-5b945ff9c5-ncx9r -- /bin/bash
bash-5.0#
At the terminal, use nslookup to see what svc-destination resolves as:
nslookup svc-destination
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      svc-destination.default.svc.cluster.local
Address:   10.100.76.46
The Address on the last line is the cluster IP chosen for this service. Traffic to port 8080 (which we defined in the YAML for our service above) at this IP will get randomly load balanced across pods matching the label selector provided.
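If you want to see exactly which pod IPs sit behind the service at any moment, you can list its endpoints; the addresses shown are the pods currently matching the selector:

kubectl get endpoints svc-destination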
- In the same terminal, try curling the service name and port:
curl svc-destination:8080
Hostname: destination-6f66576498-j84m4
IP: 127.0.0.1
IP: 192.168.171.102
GET / HTTP/1.1
Host: svc-destination:8080
User-Agent: curl/7.65.1
Accept: */*
curl svc-destination:8080
Hostname: destination-6f66576498-h4n7x
IP: 127.0.0.1
IP: 192.168.171.103
GET / HTTP/1.1
Host: svc-destination:8080
User-Agent: curl/7.65.1
Accept: */*
curl svc-destination:8080
Hostname: destination-6f66576498-h4n7x
IP: 127.0.0.1
IP: 192.168.171.103
GET / HTTP/1.1
Host: svc-destination:8080
User-Agent: curl/7.65.1
Accept: */*
curl svc-destination:8080
Hostname: destination-6f66576498-h4n7x
IP: 127.0.0.1
IP: 192.168.171.103
GET / HTTP/1.1
Host: svc-destination:8080
User-Agent: curl/7.65.1
Accept: */*
exit
Kubernetes' ClusterIP services route traffic randomly to the pods they serve.
- Delete your svc-destination service:
kubectl delete -f svc-destination.yaml
service "svc-destination" deleted
NodePort
Routing Ingress Traffic
In the case that we want to allow traffic into a deployment from the outside world, we need to route traffic from a port in the host's network namespace onto a port in the pod's network namespace; we can accomplish this through a NodePort service.
Ingress Traffic to Stateless Deployments
In the case of stateless deployments, we want to build off of the ClusterIP service that we saw in the first section above. To do so, we'll create a NodePort service, which will forward traffic from a random port on every host in our cluster to a ClusterIP service it automatically creates for us.
- Re-create the destination service, this time as a NodePort service named svc-nodeport-destination:
apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport-destination
  namespace: default
spec:
  type: NodePort
  selector:
    app: destination
  ports:
  - port: 8080
    targetPort: 80
The key values here are type: NodePort, which declares the desired service type, and the port and targetPort keys, which have the same meaning as in the ClusterIP service we created in the first exercise above.
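If you need a predictable port rather than a randomly allocated one, you can (as a sketch) pin it with the nodePort field, keeping the port-conflict caveat from the service-types overview in mind:

  ports:
  - port: 8080
    targetPort: 80
    nodePort: 30080          # must fall within the cluster's NodePort range (default 30000-32767)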
kubectl apply -f svc-nodeport-destination.yaml
service/svc-nodeport-destination created
- Find the service and look at the TYPE, CLUSTER-IP, and PORT(S) columns; the node port should be in the 30000-32767 range.
This is the port your deployment's pods are reachable at, on any node IP in your kube cluster.
kubectl get svc svc-nodeport-destination
NAME                       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
svc-nodeport-destination   NodePort   10.99.112.146   <none>        8080:31202/TCP   101s
Open your browser at <ip_manager>:<node port> from your ubuntu graphic node to confirm the traffic is routed as expected.
You should see something like this:
Hostname: destination-6f66576498-j84m4
IP: 127.0.0.1
IP: 192.168.171.102
GET / HTTP/1.1
Host: 10.10.98.10:31202
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9,it;q=0.8
Connection: keep-alive
Dnt: 1
Upgrade-Insecure-Requests: 1
- Delete your svc-nodeport-destination service:
kubectl delete svc svc-nodeport-destination
service "svc-nodeport-destination" deleted
Delete your destination
deployment.
kubectl delete deploy destination
deployment.extensions "destination" deleted
Delete your origin
deployment.
kubectl delete deploy origin
deployment.extensions "origin" deleted
LoadBalancer
Now we'll see how to expose a web app using a LoadBalancer service. First, create a simple deployment based on nginx:
kubectl create deployment web01 --image=nginx
Check the deployment status:
kubectl get deploy
Now, we expose the deployment “web01” using an imperative command:
kubectl expose deployment web01 --type LoadBalancer --port 80 --name lb-web01
Check the service status:
kubectl get svc
Take note of lb-web01's external IP.
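If you prefer the command line, the same jsonpath query used later in the blue/green section extracts the external IP directly:

kubectl get svc lb-web01 --output jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'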
Try to access it: open your web browser and go to http://<EXTERNAL-IP>.
At this point, you’ll see Nginx’s welcome page. Well done!
We can examine the svc created using this command:
kubectl edit svc lb-web01
Here, you can see the manifest generated by Kubernetes.
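If you only want to inspect it without risking an accidental change, a read-only view works as well, and the generated manifest will resemble the sketch below (the ports and selector come from the expose command; other fields depend on your cluster):

kubectl get svc lb-web01 -o yaml

apiVersion: v1
kind: Service
metadata:
  name: lb-web01
spec:
  type: LoadBalancer
  selector:
    app: web01               # label set by `kubectl create deployment web01`
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80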
ExternalName
Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service or cassandra. You specify these Services with the spec.externalName parameter.
This Service definition, for example, maps the my-service Service in the default namespace to my.database.example.com:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  type: ExternalName
  externalName: my.database.example.com
When looking up the host my-service.default.svc.cluster.local, the cluster DNS Service returns a CNAME record with the value my.database.example.com. Accessing my-service works in the same way as other Services, but with the crucial difference that redirection happens at the DNS level rather than via proxying or forwarding. Should you later decide to move your database into your cluster, you can start its Pods, add appropriate selectors or endpoints, and change the Service's type.
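As a hedged sketch of that last step (the label and port are illustrative), my-service could then become an ordinary selector-based Service pointing at the in-cluster database pods:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: my-database         # illustrative label on the in-cluster database pods
  ports:
  - port: 3306               # illustrative port
    targetPort: 3306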
Blue/Green Deployment
Create Namespace and change context
Begin by creating a namespace:
kubectl create namespace bluegreen
namespace/bluegreen created
Let's change the context:
kubectl config set-context --current --namespace bluegreen
Context "xxxxx@xxxxxx" modified.
We start by creating version 1.0 of the application, which we will call phippy; this is the blue application.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kubeanimals
    version: "1.0"
  name: phippy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubeanimals
      version: "1.0"
  strategy: {}
  template:
    metadata:
      labels:
        app: kubeanimals
        version: "1.0"
    spec:
      containers:
      - image: r.deso.tech/whoami/whoami
        name: phippy
        ports:
        - containerPort: 80
        resources: {}
        env:
        - name: NAME_APPLICATION
          value: "phippy"
kubectl apply -f kubeanimals-1.yaml
deployment.apps/phippy created
Now expose the application:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: kubeanimals
    version: "1.0"
  name: kubeanimals
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: kubeanimals
    version: "1.0"
  type: LoadBalancer
status:
  loadBalancer: {}
kubectl apply -f kubeanimals-svc.yaml
service/kubeanimals created
Check the external IP assigned by the LoadBalancer:
kubectl get svc kubeanimals --output jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'
10.10.99.121
curl the IP obtained. You will see the Phippy application:
curl 10.10.99.121
@@@@@@@@@@@LtC@@@@@0tf8@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@88@0ii;G@8@0i;;C@@@@@@@@@@@@@@@@@@@@@@
@@@@@8LLfLCtfLLLLfffftCGCLL0@@@@@@@@@@@@@@@@@
@@@@@8LLLLLfLLLLLtfLLLLLfLfC@@@@@@@@@@@@@@@@@
@@@@@@8LLCi18LLLf,C0LLLLfLL8@@@@@@@@@@@@@@@@@
@@@@@@@GLLi,tLLLf:;CLLLLLG8@@@@@@@@@@@@@@@@@@
@@@@@@0LLLCLLLLLLCLLLLLLG@@@@@@@@@@@@@@@@@@@@
@@@@@@CLLLGGGGCCCLLLLLLLC@@@@@@@@@@@@@@@@@@@@
@@@@@0fCCCGGGGCCGCLLLLLLC@@@@@@@@@@@@@@@@@@@@
@@@@@@GCGGGGGCLLGGLLLLLLC@@@@@@@@@@@@@@@@@@@@
@@@@@@@0CGCffftfGLLLLLfG@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@80GLfLLCLLLLLLL8@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@LLLLLLLLLLCL8@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@8CLLLLLLLLft10@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@8f11fLLLLfii;C@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@0iii1LLLL1i1it@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@0iiiLLLL1iiiit@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@C;iiLCLLLt1tfLG@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@LttfLttfLLCLLLL@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@8LLLL1iiii1fLLLLG@@@@@@@@@@@@@@@@@@@
@@@@@@@@@GLLLLiiiiii1LLLLLGGG008@@@@@@@@@@@@@
@@@@@@@@@CLLLLt111ffLLLLLLLLL1i1fLG@@@@@@@@@@
@@@@@@@@@LLLffCLLLCLtffLLLLLLt11i;i1C@@@@@@@@
@@@@@@@@8LLfi1tfLLLti11fLLLLLCLLf1ii1C0@@@@@@
@@@@@@@@8LLfiiiitLLLLLLLLft1tfLLLLLLLffC@@@@@
@@@@@@@@8LLLLfffLLLf1i1LfiiiitLLLLf1i1tfC8@@@
@@@@@@@@@CLLffLCLL1iiifLtii1LLLLiiiiiiitiL@@@
@@@@@@@@@GLfii1fLL1ii1LLLLfLLLLL1ii1tt1fG@@@@
@@@@@@@@@8fLLtiifLLffLLLLLLLLfftfLfLLLLC@@@@@
@@@@@@@@@@LLLLffLLLLLLLLLLLLLf1itffL1ifG@@@@@
@@@@@@@@@@GLLLLLLLffLLLLLLLLffffLLLLLLf8@@@@@
@@@@@@@@@@@LLLLLLLLLLLLLLLLLfLffLLLLLLL@@@@@@
@@@@@@@@@@@0LLffffC8LfLLLLfC8080LLLLLC8@@@@@@
@@@@@@@@@@@@@88088@@80GCCG0@@@@@@88@@@@@@@@@@
Now you want to deploy version 2.0 of kubeanimals, the green application. Create a new deployment called captainkube:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kubeanimals
    version: "2.0"
  name: captainkube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubeanimals
      version: "2.0"
  strategy: {}
  template:
    metadata:
      labels:
        app: kubeanimals
        version: "2.0"
    spec:
      containers:
      - image: r.deso.tech/whoami/whoami
        name: captainkube
        ports:
        - containerPort: 80
        resources: {}
        env:
        - name: NAME_APPLICATION
          value: "captainkube"
kubectl apply -f kubeanimals-2.yaml
deployment.apps/captainkube created
kubectl get pod --show-labels
NAME                           READY   STATUS    RESTARTS   AGE    LABELS
captainkube-66bf6d98f8-tr9sp   1/1     Running   0          2m6s   app=kubeanimals,pod-template-hash=66bf6d98f8,version=2.0
phippy-799b96769f-9jl2g        1/1     Running   0          13m    app=kubeanimals,pod-template-hash=799b96769f,version=1.0
Now you have two versions of the application, 1.0 and 2.0. Your service still uses the version 1.0 selector.
kubectl get svc --show-labels
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE   LABELS
kubeanimals   LoadBalancer   10.102.194.9   10.10.99.121   80:30489/TCP   11m   app=kubeanimals,version=1.0
Edit the file kubeanimals-svc.yaml and change the version from 1.0 to 2.0.
This changes the label and the selector, switching traffic immediately from the blue application to the green one.
vi kubeanimals-svc.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: kubeanimals
    version: "2.0"
  name: kubeanimals
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: kubeanimals
    version: "2.0"
  type: LoadBalancer
status:
  loadBalancer: {}
Now apply the kubeanimals service again:
kubectl apply -f kubeanimals-svc.yaml
service/kubeanimals configured
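As an alternative to editing the file and re-applying it, you could switch the label and selector in place with a patch; a minimal sketch:

kubectl patch svc kubeanimals -p '{"metadata":{"labels":{"version":"2.0"}},"spec":{"selector":{"app":"kubeanimals","version":"2.0"}}}'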
kubectl get svc --show-labels
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE   LABELS
kubeanimals   LoadBalancer   10.102.194.9   10.10.99.121   80:30489/TCP   15m   app=kubeanimals,version=2.0
Now you have updated kubeanimals to version 2.0.
Try to curl the LoadBalancer service again:
kubectl get svc kubeanimals --output jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'
10.10.99.121
curl the IP obtained; you will see the CaptainKube application:
curl 10.10.99.121
@@@@@@@@@@@@@@@@@@@0GGG08@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@G1:. .,;L@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@L. . :t1... ;@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@1 ,.t0i:..,..0@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@i..G8fL,...C@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@CCG,..;1i,..t0C8@@@@@@@@@@@@@@@
@@@@@@@@@8LCCL. ..... .... 1GCCG@@@@@@@@@@@
@@@@@@@@@8fffLL1;:,,,,,,,,:itLfffG@@@@@@@@@@@
@@@@@@@@@@Cff11fLLfffffffLLLt1fLf8@@@@@@@@@@@
@@@@@@@@@@Gff,..:1fLfLLLLt;...tLL@@@@@@@@@@@@
@@@@@@@@@@CfL1.Li.,tLLLf:.;C:;LfL@@@@@@@@@@@@
@@@@@@@@@@LfL1,GC,,ft:if:.L0:;Lff0@@@@@@@@@@@
@@@@@@@@@8fffLtii1ft. iL1ii1LfLfC@@@@@@@@@@@
@@@@@@@@@0fLffLLLLLft;1ffLLLLffLfC@@@@@@@@@@@
@@@@@@@@@GffffffffffLLLffffffffffL@@@@@@@@@@@
@@@@@@@@@LfffLfffffffffffffffLffLf0@@@@@@@@@@
@@@@@@@@0fLffffLffffffffffffffffLfL@@@@@@@@@@
@@@@@@@@CfLLfffffffffffffffffffLLLf8@@@@@@@@@
@@@@@@@@0fffffLfffffffffffffffftffL@@@@@@@@@@
@@@@@@@@@CtfffffffffffffffffffLfft8@@@@@@@@@@
@@@@@@@@@CfLffffffffffffffffffffLf8@@@@@@@@@@
@@@@@@@@@GfLLLffffffffffffffffLLfL@@@@@@@@@@@
@@@@@@@@@@GfffffffLLLLLLLLffffffL8@@@@@@@@@@@
@@@@@@@@@@@8CLffttttttttttttffLG@@@@@@@@@@@@@
@@@@@@@@@@@@@@@8800GGGGG00088@@@@@@@@@@@@@@@@
Clean Up
Now delete the service and deployments you have created.
kubectl delete svc kubeanimals
service "kubeanimals" deleted
kubectl delete deploy captainkube
deployment.apps "captainkube" deleted
kubectl delete deploy phippy
deployment.apps "phippy" deleted
kubectl delete deploy web01
deployment.apps "web01" deleted