Adding Health Checks to a Go App

Using Critical Stack to deploy a Go REST API application with data persistence and health checks

Getting Started

Prerequisites:

  1. The previous Go demo, which set up the Docker registry, Service, and Ingress - this demo only updates the previous Docker image with new functionality.

Overview

Update the previous lab to introduce liveness and readiness probe health checks.

Lifted from Making a RESTful JSON API in Go and adapted to Critical Stack. This is example code only and is not intended to teach proper Go coding.

Liveness and Readiness Probes

A quick introduction to health checks in Kubernetes, lifted from kubernetes.io:

The kubelet uses liveness probes to know when to restart a Container. For example, liveness probes could catch a deadlock, where an application is running but unable to make progress. Restarting a Container in such a state can help make the application more available despite bugs.

The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.

Many applications running for long periods of time eventually transition to broken states and cannot recover except by being restarted. Kubernetes provides liveness probes to detect and remedy such situations. This sounds like a hack, and perhaps it is, but once an application is in service, keeping it running with no user downtime is the top priority. Instead of receiving a call at 3am to fix a broken service, the application can self-heal: monitoring reports the pod restart, and we can investigate and remedy the failure during the day.
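
To make the distinction concrete, below is a minimal, self-contained Go sketch of the two kinds of endpoints the probes in this lab point at (the deployment later in this lab targets "/" for readiness and "/health" for liveness). The specific checks shown - a stat of the data directory and an in-memory health flag - are illustrative assumptions, not the lab's actual handlers.go.

    package main

    import (
        "log"
        "net/http"
        "os"
        "sync/atomic"
    )

    // healthy simulates the application's internal health state:
    // 1 = healthy, 0 = alive but unable to do useful work.
    var healthy int32 = 1

    func main() {
        // Readiness: "can this pod serve traffic right now?"
        // The check here (stat the persistent data directory) is an assumption.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            if _, err := os.Stat("/app/data"); err != nil {
                http.Error(w, "data volume not available", http.StatusServiceUnavailable)
                return
            }
            w.Write([]byte("ready\n"))
        })

        // Liveness: "is this process healthy enough to keep running?"
        http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
            if atomic.LoadInt32(&healthy) == 0 {
                http.Error(w, "unhealthy", http.StatusInternalServerError)
                return
            }
            w.Write([]byte("ok\n"))
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }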

Steps for Building Hello World App with Health Checks

  1. Copy the Dockerfile and .go files from this repo into a new folder, ~/Development/go/app4.

      $ cd ~/Development
      $ mkdir -p go/app4
      $ cd go/app4
      $ cp  ~/Documents/GitHub/Critical-Stack-Services/Internal/Feature\ Labs/Workloads/CS\ Go\ Demo\ REST\ Persistence/* .
      $ ls -l
      total 968
      -rw-r--r--  1 mee639  staff     348 Apr 12 13:19 Dockerfile
      -rw-r--r--  1 mee639  staff    6580 Apr 24 13:34 README.md
      -rwxr-xr-x  1 mee639  staff     961 Apr 12 12:59 build.sh*
      -rw-r--r--  1 mee639  staff      92 Apr  4 13:02 error.go
      -rw-r--r--  1 mee639  staff    3089 Apr 22 16:20 handlers.go
      -rw-r--r--  1 mee639  staff    1955 Apr 22 11:07 hello-go-deployment.yaml
      -rw-r--r--  1 mee639  staff     339 Apr  4 13:02 logger.go
      -rw-r--r--  1 mee639  staff     135 Apr  4 13:02 main.go
      -rw-r--r--  1 mee639  staff  464030 Apr 22 11:08 probe1.png
      -rw-r--r--  1 mee639  staff  466803 Apr 22 11:08 probe2.png
      -rw-r--r--  1 mee639  staff    1644 Apr 12 22:26 repo.go
      -rw-r--r--  1 mee639  staff     396 Apr  4 13:02 router.go
      -rw-r--r--  1 mee639  staff     634 Apr 22 10:32 routes.go
      -rw-r--r--  1 mee639  staff    1483 Apr 23 11:07 test.sh
      -rw-r--r--  1 mee639  staff     211 Apr  4 13:02 todo.go
  2. As with the previous lab, compile the REST API example code into hello-go. CGO_ENABLED=0 produces a statically linked binary suitable for a minimal container image, and -ldflags="-s -w" strips symbol and debug information to keep it small.

    $ CGO_ENABLED=0 GOARCH=amd64 GOOS=linux go build -ldflags="-s -w" -a -o hello-go .
  3. As with the previous lab, build a Docker image using the Dockerfile.

    $ docker build -t hello-go -f Dockerfile .
  4. Tag the new image as 0.0.4, substituting your own image/repo:tag.

    $ docker tag hello-go jabbottc1/hello-go:0.0.4
  5. Push the image to your registry (the example below pushes to Docker Hub):

        $ docker push jabbottc1/hello-go:0.0.4
        The push refers to repository [docker.io/jabbottc1/hello-go]
        710ea3f743ad: Pushed
        7d65d9239ce0: Pushed
        de60bccc7d34: Pushed
        9c036f9520c9: Pushed
        12ddbeab57b4: Layer already exists
        2e164ec30f89: Layer already exists
        6fb629cd5c56: Layer already exists
        a20e949b8495: Layer already exists
        f3f30f0d9fff: Layer already exists
        a8167f4abe99: Layer already exists
        0a2acf92ba78: Layer already exists
        0.0.4: digest: sha256:79146b803f5c17774f27edf5a6c8fdb6a3302e5ecf6bf9fefb776ebfbe549d06 size: 2613
  6. Log in to your Critical Stack deployment. Select Data Center -> Workloads -> Deployments. Click the gear icon, then Edit.

  7. Add the following readinessProbe and livenessProbe sections under spec -> containers in your deployment:

      containers:
      - name: hello-go
        image: jabbottc1/hello-go:0.0.4
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        volumeMounts:
          - name: hello-go-storage
            mountPath: /app/data
        readinessProbe:
          httpGet:
            path: /
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 15
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 2
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 15
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 2 
  8. Here is what the full YAML description looks like:

       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: hello-go-deployment
         namespace: development
       spec:
         selector:
           matchLabels:
             app: hello-go
         replicas: 2
         template:
           metadata:
             labels:
               app: hello-go
           spec:
             containers:
             - name: hello-go
               image: jabbottc1/hello-go:0.0.4
               imagePullPolicy: Always
               ports:
               - containerPort: 8080
               volumeMounts:
               - name: hello-go-storage
                 mountPath: /app/data
               readinessProbe:
                 httpGet:
                   path: /
                   port: 8080
                   scheme: HTTP
                 initialDelaySeconds: 15
                 timeoutSeconds: 1
                 periodSeconds: 10
                 successThreshold: 1
                 failureThreshold: 2
               livenessProbe:
                 httpGet:
                   path: /health
                   port: 8080
                   scheme: HTTP
                 initialDelaySeconds: 15
                 timeoutSeconds: 1
                 periodSeconds: 10
                 successThreshold: 1
                 failureThreshold: 2
             volumes:
               - name: hello-go-storage
                 persistentVolumeClaim:
                   claimName: hello-go-storage-claim
             securityContext:
               fsGroup: 1000
  9. Save and Exit

  10. The periodSeconds field specifies that a probe will be performed every 10 seconds. The initialDelaySeconds field is the delay before the first probe is performed. With failureThreshold set to 2, a container that fails two consecutive liveness probes (roughly 20 seconds) is restarted by the kubelet.

  11. Scale the deployment to 3 replicas. In the deployment UI, notice the desired, current, and available values as the new containers are created but are not yet available:

  12. For demonstration purposes, a route was added that simulates pod failure. When this route is accessed, the request is routed to one of the pods in the cluster and toggles that pod's failure mode. Use curl or a browser to access the route (a sketch of how such a toggle might look follows this list).

    $ curl -s https://<yourcnameforyourapp>/forcefail

  13. Then go to Data Center -> Workloads -> Pods and watch the liveness probe fail; the kubelet restarts the failing container (the pod's restart count increments). The other two pods continue serving content - no downtime.
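
The lab's handlers.go is not reproduced here, but the idea behind /forcefail is a route that flips an in-memory flag which the liveness endpoint then reports as a failure. The following is a minimal sketch of that pattern with assumed names and responses, not the actual implementation:

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "sync/atomic"
    )

    // healthy is the assumed in-memory failure flag: 1 = healthy, 0 = failing.
    var healthy int32 = 1

    func main() {
        // /forcefail toggles the flag, simulating a pod that is still running
        // but can no longer do useful work.
        http.HandleFunc("/forcefail", func(w http.ResponseWriter, r *http.Request) {
            if atomic.CompareAndSwapInt32(&healthy, 1, 0) {
                fmt.Fprintln(w, "failure mode enabled")
                return
            }
            atomic.StoreInt32(&healthy, 1)
            fmt.Fprintln(w, "failure mode disabled")
        })

        // /health is the endpoint the liveness probe calls. With the probe
        // settings above (periodSeconds: 10, failureThreshold: 2), the kubelet
        // restarts the container roughly 20 seconds after /forcefail is hit.
        http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
            if atomic.LoadInt32(&healthy) == 0 {
                http.Error(w, "unhealthy", http.StatusInternalServerError)
                return
            }
            fmt.Fprintln(w, "ok")
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }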

Conclusion

This was a very simple demonstration of how liveness and readiness probes can be used to detect failing containers and restart them automatically, with no user-visible downtime.