Adding Persistent Storage to a Go App

Using Critical Stack to deploy a Go REST API application with data persistence

Getting Started

Pre-requisites:

  1. The previous Go REST API lab covering the Docker registry, Service, and Ingress setup - this demo only updates the previous Docker image with new functionality

Overview

Update the previous Go REST API lab to introduce data persistence.

The code is lifted from Making a RESTful JSON API in Go and adapted to Critical Stack, with persistence bolted on via an inefficient flat file. This is example code only and is not intended to teach proper Go style.

Learning steps prior to executing lab

The previous Go REST API lab didn't persist data and couldn't scale beyond 1 pod. Scale the deployment to 2 pods, then use a web browser or curl to access your application's hostname several times. You should only see the data inserted in the previous lab intermittently, because the 2nd pod doesn't share the same data source.
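
If you have kubectl access to the cluster, the sketch below is one way to watch this happen; the deployment name and namespace match the manifests used later in this lab, and the hostname is a placeholder.

    $ kubectl -n development scale deployment/hello-go-deployment --replicas=2
    $ for i in 1 2 3 4 5 6; do curl -s https://<yourcnameforyourapp>/todos; echo; done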

K8s Storage

Quick Introduction to Storage in Kubernetes, lifted from Storage Volumes:

On-disk files in a Container are ephemeral, which presents some problems for non-trivial applications when running in Containers. First, when a Container crashes, kubelet will restart it, but the files will be lost - the Container starts with a clean state. Second, when running Containers together in a Pod it is often necessary to share files between those Containers. The Kubernetes Volume abstraction solves both of these problems.

A Kubernetes volume has an explicit lifetime - the same as the Pod that encloses it. Consequently, a volume outlives any Containers that run within the Pod, and data is preserved across Container restarts. Of course, when a Pod ceases to exist, the volume will cease to exist, too. Perhaps more importantly, Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).
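
One detail to keep straight: PVs are cluster-scoped resources, while PVCs live inside a namespace. Illustrative kubectl commands (the lab itself works through the UI):

    $ kubectl get pv                    # cluster-scoped: no namespace flag
    $ kubectl -n development get pvc    # namespace-scoped claims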

Type of Storage         Data Storage Lifespan
Container filesystem    Container lifetime
Volume                  Pod lifetime
Persistent volume       Cluster lifetime
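
A minimal sketch of the middle row in this table, assuming kubectl access: an emptyDir volume lives exactly as long as its Pod and lets two containers in the Pod share files. All names below are illustrative and not part of the lab.

    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-volume-demo
      namespace: development
    spec:
      volumes:
        - name: scratch
          emptyDir: {}        # lives and dies with the Pod
      containers:
        - name: writer
          image: busybox
          command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
          volumeMounts:
            - name: scratch
              mountPath: /data
        - name: reader
          image: busybox
          command: ["sh", "-c", "sleep 5 && cat /data/msg && sleep 3600"]
          volumeMounts:
            - name: scratch
              mountPath: /data
    EOF

Delete the Pod and the volume's contents disappear with it; a PersistentVolume, by contrast, survives.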

Steps for Building your RESTful Hello World App with Persistent Data

  1. Copy the Dockerfile and .go files from this repo into a new folder ~/Development/go/app3.

    $ cd ~/Development
    $ mkdir -p go/app3
    $ cd go/app3
    $ cp ~/Documents/GitHub/Critical-Stack-Services/Internal/Feature\ Labs/Workloads/CS\ Go\ Demo\ REST\ Persistence/* .
  2. As with the previous lab, compile the REST API example code into hello-go.

    $ CGO_ENABLED=0 GOARCH=amd64 GOOS=linux go build -ldflags="-s -w" -a -o hello-go .
  3. As with the previous lab, build, tag, and push the image to a Docker registry. Before pushing, you can optionally smoke test the image locally (see the sketch after this list).

    $ docker build -t hello-go -f Dockerfile .
    $ docker tag hello-go jabbottc1/hello-go:0.0.3
    $ docker push jabbottc1/hello-go:0.0.3
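
Optional smoke test - a sketch, assuming Docker can run the image locally. The app listens on port 8080 (the port the Deployment exposes later), and the bind mount gives it a writable /app/data in case it creates its flat file on startup; the container name is illustrative.

    $ mkdir -p /tmp/hello-go-data
    $ docker run --rm -d -p 8080:8080 -v /tmp/hello-go-data:/app/data --name hello-go-test hello-go
    $ curl -s http://localhost:8080
    Welcome!
    $ docker stop hello-go-test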

Deploy your container into Critical Stack

  1. From the Critical Stack UI, go to Data Center -> Storage -> Persistent Volume Claims
  2. Select Create Persistent Volume Claim. Copy the sample YAML below; we will use it for our PVC configuration. Change namespace to your namespace.

        kind: PersistentVolumeClaim
        apiVersion: v1
        metadata:
            name: hello-go-storage-claim
            namespace: development
        spec:
            accessModes:
                - ReadWriteOnce
            volumeMode: Filesystem
            resources:
                requests:
                    storage: 1Gi
  3. Save and Exit

  4. Next we need to edit the hello-go deployment. Three changes are needed: update the image tag so that we pull down 0.0.3, add a volume (with a matching volumeMount) backed by the Persistent Volume Claim, and set replicas to 1, since the EBS-backed ReadWriteOnce claim can only be mounted on a single node.

        apiVersion: apps/v1
        kind: Deployment
        metadata:
            name: hello-go-deployment
            namespace: development
        spec:
            selector:
                matchLabels:
                    app: hello-go
            replicas: 1
            template:
                metadata:
                    labels:
                        app: hello-go
                spec:
                    containers:
                        - name: hello-go
                          image: jabbottc1/hello-go:0.0.3
                          imagePullPolicy: Always
                          ports:
                              - containerPort: 8080
                          volumeMounts:
                              - name: hello-go-storage
                                mountPath: /app/data
                    volumes:
                        - name: hello-go-storage
                          persistentVolumeClaim:
                              claimName: hello-go-storage-claim
                    securityContext:
                        fsGroup: 1000
  5. Save and Exit. Once the new pod is running, you can verify the claim and mount with the sketch below.
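
A verification sketch, assuming kubectl access (the lab itself uses the Critical Stack UI). The claim should show STATUS Bound, and the pod's Mounts should include /app/data.

    $ kubectl -n development get pvc hello-go-storage-claim
    $ kubectl -n development get pods -l app=hello-go
    $ kubectl -n development describe pod -l app=hello-go | grep -A 3 'Mounts:'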

Testing your RESTful Hello World App with Persistent Storage

  1. Verify the app works by using the following commands in a new terminal window. Alternatively, you could open a browser to https://<yourcnameforyourapp>

    Basic test to see the Welcome message and the initial entries on container start up.

    curl -s https://<yourcnameforyourapp>

    curl -s https://<yourcnameforyourapp>/todos

    $ curl -s https://john-shared-hello-go.clouddqt.capitalone.com
    Welcome!
    $ curl -s https://john-shared-hello-go.clouddqt.capitalone.com/todos | python -m json.tool
    [
         {
            "completed": false,
            "due": "0001-01-01T00:00:00Z",
            "id": 1,
            "name": "Write presentation"
         },
         {
            "completed": false,
            "due": "0001-01-01T00:00:00Z",
            "id": 2,
            "name": "Host meetup
         }
    ]
  2. Insert new entry:

    curl -s -H "Content-Type: application/json" -d '{"name":"Time to get Pizza"}' https://<yourcnameforyourapp>/todos

    $ curl -s -H "Content-Type: application/json" -d '{"name":"Time to get Pizza"}' https://john-shared-hello-go.clouddqt.capitalone.com/todos
    {"id":3,"name":"Time to get Pizza","completed":false,"due":"0001-01-01T00:00:00Z"}
  3. Show all of the entries:

    curl -s https://<yourcnameforyourapp>/todos | python -m json.tool

    $ curl -s https://john-shared-hello-go.clouddqt.capitalone.com/todos | python -m json.tool
    [
         {
            "completed": false,
            "due": "0001-01-01T00:00:00Z",
            "id": 1,
            "name": "Write presentation"
         },
         {
            "completed": false,
            "due": "0001-01-01T00:00:00Z",
            "id": 2,
            "name": "Host meetup"
         },
         {
            "completed": false,
            "due": "0001-01-01T00:00:00Z",
            "id": 3,
            "name": "Time to get Pizza"
         }
    ]
  4. Now that we have verified our application is working, we need to delete the pod running our application and create a new one. We can do this easily by modifying the scale of the deployment.

  5. From the Critical Stack UI, navigate to Data Center -> Workloads -> Deployments, right-click on the deployment, then select Scale.

  6. Change the deployment scale from 1 to 0. This will force a termination of the pod running our application. Note that you could also edit the deployment and change the replicas value to 0.

  7. Test your application to verify it is not running:

    curl -s https://<<yourcnameforyourapp>>/todos | python -m json.tool

    $ curl -s https://john-shared-hello-go.clouddqt.capitalone.com/todos | python -m json.tool
    No JSON object could be decoded
  8. Now scale the deployment back to 1. You will see the desired, current, and available counts change from 0 back to 1. Refresh if necessary.

  9. Now that the deployment is running again, test whether the records we stored before scaling the deployment are still there (a kubectl equivalent of these scaling steps is sketched after this list).

    curl -s https://<yourcnameforyourapp>/todos | python -m json.tool

    $ curl -s https://john-shared-hello-go.clouddqt.capitalone.com/todos | python -m json.tool
    [
         {
            "completed": false,
            "due": "0001-01-01T00:00:00Z",
            "id": 1,
            "name": "Write presentation"
         },
         {
            "completed": false,
            "due": "0001-01-01T00:00:00Z",
            "id": 2,
            "name": "Host meetup"
         },
         {
            "completed": false,
            "due": "0001-01-01T00:00:00Z",
            "id": 3,
            "name": "Time to get Pizza"
         }
    ]
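
For reference, a hedged kubectl equivalent of the scale-down/scale-up test above. The deployment name, namespace, and app label come from the manifests in this lab; the hostname is a placeholder.

    $ kubectl -n development scale deployment/hello-go-deployment --replicas=0
    $ kubectl -n development wait --for=delete pod -l app=hello-go --timeout=60s
    $ kubectl -n development scale deployment/hello-go-deployment --replicas=1
    $ kubectl -n development rollout status deployment/hello-go-deployment
    $ curl -s https://<yourcnameforyourapp>/todos | python -m json.tool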

Conclusion

You have successfully deployed a Go REST API application to Critical Stack with persistent data. But is that enough? How do we ensure the application is healthy? In the next lab we will add health checks via liveness and readiness probes.

A few notes about this application: the persistence in this lab uses EBS, which only works within 1 AZ (specifically, on 1 node); this was done for simplicity, as the application doesn't support spreading data redundantly across AZs. A better example of using EBS across an entire region is StatefulSets, which provide application-level support for persistent data instead of the stateless Deployment model. EFS is another storage option that could be combined with a stateless Deployment, but EFS provisioning requires a separate manual step, so it was excluded from this demonstration. A stateless Deployment talking to a back-end database outside of K8s is another widely used pattern.