[Image: Sell-it homepage]

Deploying Sell-it Locally

14 Dec 2023 - 10 min read

We're continuing from here, where we successfully set up the project again and registered a user.

In this chapter, my focus is on configuring a Kubernetes setup to deploy the app locally using Minikube. This involves a bit of over-engineering, as I mentioned in my first post about migrating Sell-it. I'm doing this for fun and to learn.

In a real-world scenario, the app could be deployed on Vercel for example, since we don't have many things. However, since I'm planning to decouple some of the API endpoints into microservices (again, just over-engineering), I prefer to manage all the infrastructure locally to avoid paying for it on a cloud provider.

Setup MongoDB

In a real production environment, I would prefer to host the production database directly on a cloud provider. However, for the sake of flexibility (being able to pin the old mongo:3.6 image), the current setup is sufficient.

These are the configuration files declaring the Kubernetes objects:

Service

apiVersion: v1
kind: Service
metadata: 
  name: sell-it-mongodb-service
  labels: 
    app: sell-it-mongodb
spec: 
  ports: 
    - name: mongodb
      port: 27017
      nodePort: 30332
  type: NodePort
  selector: 
    app: sell-it-mongodb
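
With the NodePort in place, the database is reachable from the host for quick checks. A minimal sketch, assuming the mongo shell is installed locally and the database is called sell-it (the name is my assumption):

# minikube ip prints the node address; 30332 is the nodePort from the Service above
mongo "mongodb://$(minikube ip):30332/sell-it"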

Deployment

apiVersion: apps/v1
kind: Deployment
metadata: 
  name: sell-it-mongodb-deployment
spec: 
  selector: 
    matchLabels: 
      app: sell-it-mongodb
  replicas: 1
  strategy: 
    type: RollingUpdate
    rollingUpdate: 
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template: 
    metadata: 
      labels: 
        app: sell-it-mongodb
    spec: 
      containers: 
        - name: sell-it-mongodb
          image: mongo:3.6
          imagePullPolicy: Always
          ports: 
            - containerPort: 27017
              name: mongodb 
          volumeMounts: 
            - name: mongodb-persistent-storage
              mountPath: /data/db
          resources:
            limits:
              memory: 128Mi
              cpu: 500m
      volumes: 
        - name: mongodb-persistent-storage
          persistentVolumeClaim: 
            claimName: sell-it-mongodb-pvc
 

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata: 
  name: sell-it-mongodb-pvc
  labels: 
    app: sell-it-mongodb
spec: 
  accessModes: 
    - ReadWriteOnce
  resources: 
    requests: 
      storage: 5Gi
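
After applying it, it's worth confirming the claim actually binds before the deployment tries to mount it:

# STATUS should read Bound once Minikube's default StorageClass provisions a volume
kubectl get pvc sell-it-mongodb-pvc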

Setup server

Now it's time for the server. In this case, we need to create a containerized version of our app to run in Kubernetes.

āš ļø Git had to be installed for Bower installation

FROM node:8-alpine3.11 AS base

# Bower pulls some of its packages over git, so it has to be present at install time
RUN apk update && apk add --no-cache git

WORKDIR /app

COPY package* .
COPY bower.json .

# Install production deps, then drop git and the apk cache to keep the image small
RUN npm ci --production && \
    npm run bower:install -- --allow-root && \
    apk del git && \
    rm -rf /var/cache/apk/*

COPY --chown=node:node . .

# Run as the unprivileged node user
USER node
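
Before Kubernetes enters the picture, the image can be smoke-tested with plain Docker. A quick sketch (the dev tag and the local .env file are my placeholders, not something from the repo):

docker build -t sell-it:dev .
# Same entrypoint the deployment uses; assumes the env file sets PORT=3000
docker run --rm -p 3000:3000 --env-file .env sell-it:dev node index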

And finally here are the Kubernetes config files:

Service

apiVersion: v1
kind: Service
metadata:
  name: sell-it-service
spec:
  selector:
    app: sell-it
  ports:
    - port: 80
      targetPort: 3000
  type: LoadBalancer

Deployment

The deployment is just a template that will be replaced by a generated one using envsubst; this way, the new IMAGE_TAG is set automatically every time I build a new image.

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: sell-it-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sell-it
  template:
    metadata:
      labels:
        app: sell-it
    spec:
      containers:
      - name: node
        image: personal.local:5000/sell-it:${IMAGE_TAG}
        imagePullPolicy: IfNotPresent
        envFrom:
        - configMapRef:
            name: sell-it-configmap
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 3000
          periodSeconds: 120
          initialDelaySeconds: 30
        command: [sh, -c]
        args: ["cd /app && node index"]
 

Configmap

The configmap is just a template as well, and it will be replaced by a generated one containing the variables. Note that I'm not adding the env variables as a Secret.

apiVersion: v1
kind: ConfigMap
metadata:
  name: sell-it-configmap
data:
  PORT: "${PORT}"
  DB_URI: ${DB_URI}
  SECRET: ${SECRET}
  CLOUDINARY_API_KEY: "${CLOUDINARY_API_KEY}"
  CLOUDINARY_API_SECRET: ${CLOUDINARY_API_SECRET}
  CLOUD_NAME: ${CLOUD_NAME}
  UPLOAD_FOLDER: ${UPLOAD_FOLDER}
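
For reference, moving the sensitive values into a Secret instead would be a one-liner against the same env file (the Secret name here is my own placeholder):

# Builds an Opaque Secret with one key per line of .env.production
kubectl create secret generic sell-it-secrets --from-env-file=.env.production

The deployment would then reference it with a secretRef instead of the configMapRef under envFrom.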

Additionally, there is a livenessProbe pointing at a new endpoint I created. Ideally, this could be handled by some health-check library instead of a hand-rolled endpoint, but for now, that's not a problem.
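
To check the endpoint behaves before the kubelet starts judging it, a quick port-forward does the trick:

kubectl port-forward deployment/sell-it-deployment 3000:3000 &
# Any status in the 2xx-3xx range counts as a pass for the probe
curl -i http://localhost:3000/healthz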

Some notes about the config files

For the image hosting, I'm not using Docker Hub but a local registry. You can set up yours following these steps; the only thing you must take into account is to start Minikube with minikube start --insecure-registry "your-registry-ip:port". Additional configuration might be needed: in my case, I added a custom domain, personal.local, to /etc/hosts, and I also had to add this to my Docker engine config:

 "insecure-registries": [
    "personal.local:5000"
  ]
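
For completeness, the registry itself and the Minikube flag look roughly like this in my setup (hostname and port match the config above):

# A throwaway local registry, reachable as personal.local:5000 thanks to /etc/hosts
docker run -d -p 5000:5000 --restart=always --name registry registry:2
minikube start --insecure-registry="personal.local:5000"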

Deploy

Since I'm not deploying anywhere in the cloud, I can't use a proper CI/CD system to automate the necessary steps. Instead, I'm currently doing it manually. I will explore other solutions in the future, but sometimes the priority is to make it work before optimizing processes.

Makefile

To make things easier, I created a Makefile that automates the process with the following steps:

  • Build the image, tag it, and push it to the local registry
  • Use envsubst to substitute my .env.production variables into the placeholders of my Kubernetes files

Additionally, there are some dev commands.

include resources/scripts/build.mk
 
SHELL := /bin/bash
K8S_DIR = resources/k8s/sell-it
 
install:
	@npm ci && npm run bower:install
 
dev: install
	@docker compose up -d && npm run dev
 
deploy: build push apply-yaml clean
 
# set -a auto-exports the sourced variables so envsubst can actually see them
apply-yaml:
	@echo "Applying ConfigMap and related resources..."
	@set -a && source .env.production && set +a && envsubst < "${K8S_DIR}/web/configmap.template.yaml" > "${K8S_DIR}/web/configmap.yaml"
	@envsubst < "${K8S_DIR}/web/deployment.template.yaml" > "${K8S_DIR}/web/deployment.yaml"
	@kubectl apply -f "${K8S_DIR}/web/configmap.yaml"
	@kubectl apply -f "${K8S_DIR}/web/deployment.yaml"
	@kubectl apply -f "${K8S_DIR}/web/service.yaml"
	@kubectl apply -f "${K8S_DIR}/mongodb"
 
clean:
	@echo "Cleaning up..."
	@rm "${K8S_DIR}/web/configmap.yaml"
	@rm "${K8S_DIR}/web/deployment.yaml"

And that's all. With this configuration, we can run make deploy, which will apply the new configuration to our Minikube cluster.

From this point, there are other things we can do beyond optimizing the Makefile (the image-building process, for instance), the K8s files, or the Dockerfile.

In addition, I added a custom domain, sell-it.localhost, to my /etc/hosts (the .localhost suffix just signals to Chrome that we are local, so the Secure options can be used).

Finally, in order to access the site, I use minikube tunnel, although you can also use minikube service <your-service>.
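
Concretely, that looks like this:

# In one terminal; this may ask for sudo since it routes privileged ports
minikube tunnel
# In another, the service should now show an EXTERNAL-IP
kubectl get service sell-it-service
# Or skip the tunnel and let Minikube print/open a URL for the service
minikube service sell-it-service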

Recap

In this chapter, my goal was to deploy the app and MongoDB in Minikube. The experience has been interesting and fun, and it has sparked my interest to delve deeper into Kubernetes and related technologies.

Here are the code changes

Next steps

Now, I'm planning on populating the application with some data to make it more realistic, and fixing some issues. After that, I'm considering whether to start creating a frontend, decoupling some parts of the API, or simulating a request for a new feature in the existing code. I'll give it some thought.

Let's continue with the adventure! 🚀