Getting Started With Helm Charts


Kubernetes is becoming the standard orchestration system for automated deployment, scaling, and management of containerized applications. When working with Kubernetes, you are likely to deploy objects such as Deployments, volumes, Services, and ConfigMaps many times during an application’s lifecycle. Helm is a tool to standardize the packaging and deployment of applications while providing flexibility and configurability.

In my previous article “Building a Continuous Delivery Pipeline with GitHub and Helm to deploy Magnolia to Kubernetes” we explored deploying a containerized Magnolia application to a Kubernetes (K8S) cluster. In this article, we’ll look at Helm in more detail.

Helm describes itself as an “Application Package Manager for Kubernetes”, but it can do so much more than this description might suggest: Helm manages applications that run in a Kubernetes cluster and coordinates their download, installation, deployment, upgrade, and deletion.

In the world of Helm, Helm charts define applications as a collection of Kubernetes resources using YAML configuration files and templates. A chart does not only consist of metadata that describes the application; it also manages the infrastructure needed to operate the application using Kubernetes primitives.

Once an instance of a chart is installed in the cluster, it’s called a ‘release’. One chart can be installed into the same cluster multiple times. Each time it is installed, a new release is created.
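
For example, installing the same chart twice under different names produces two independent releases that Helm tracks, upgrades, and deletes separately. A quick sketch, using a hypothetical chart directory and release names:

Shell

# Each install of the same chart creates a separate release
$ helm install my-release-1 ./my-chart/
$ helm install my-release-2 ./my-chart/

# List all releases in the current namespace
$ helm list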

Installing and configuring Helm 3

To install Helm v3.x, run the following commands:

Shell

$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh

To inspect what Helm can do, run helm --help.

To create a scaffold of a Helm chart including template files, run $ helm create my-first-chart.
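
The generated scaffold should look roughly like this; the exact set of template files varies slightly between Helm versions:

Shell

my-first-chart/
├── Chart.yaml
├── charts/
├── templates/
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   └── tests/
│       └── test-connection.yaml
└── values.yaml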

Let’s now examine a real chart, the Magnolia Helm chart.

Magnolia Helm chart

I will reuse the Magnolia Helm chart from my previous article. It is located under the helm-chart/ directory of the magnolia-docker repository in GitHub and has the following structure:

Shell

.
├── Chart.yaml
├── templates
│   ├── configmap.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── statefulset.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
2 directories, 8 files

Chart.yaml defines the chart and values.yaml specifies the values to be used during its deployment.

Chart.yaml

YAML

apiVersion: v2
name: magnolia
description: Deploy a Basic Magnolia CMS container
type: application
version: 0.1.0
appVersion: 6.2.3

The first part includes apiVersion, a mandatory parameter that specifies the chart API version (v2 for Helm 3), the name of the chart, and its description. The next section describes the chart type (application by default, or alternatively library), the chart’s version, which you should increment as you make changes to the chart, and appVersion, the version of the application the chart deploys.

values.yaml

Template files fetch deployment information from values.yaml. To customize your Helm chart, you can either edit the existing file or create a new one.

YAML

replicaCount: 1
image:
  repository: ghcr.io/magnolia-sre/magnolia-docker/magnolia-docker
  pullPolicy: Always
  tag: "latest"
service:
  name: http
  port: 80
  targetPort: 8080
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  path: /
  authorHost: github-magnolia-docker-author.experimentation.magnolia-cloud.com
  publicHost: github-magnolia-docker-public.experimentation.magnolia-cloud.com
  tls:
    - secretName: github-magnolia-docker-author-tls
      hosts:
        - github-magnolia-docker-author.experimentation.magnolia-cloud.com
    - secretName: github-magnolia-docker-public-tls
      hosts:
        - github-magnolia-docker-public.experimentation.magnolia-cloud.com
resources:
  limits:
    memory: 1000Mi
  requests:
    cpu: 500m
    memory: 1000Mi
liveness:
  httpGet:
    path: /.rest/status
    port: http
  timeoutSeconds: 4
  periodSeconds: 5
  failureThreshold: 3
  initialDelaySeconds: 90
readiness:
  httpGet:
    path: /.rest/status
    port: http
  timeoutSeconds: 4
  periodSeconds: 6
  failureThreshold: 3
  initialDelaySeconds: 90
env:
  author:
    - name: JAVA_OPTS
      value: >-
        -Dmagnolia.bootstrap.authorInstance=true
        -Dmagnolia.update.auto=true
        -Dmagnolia.home=/opt/magnolia
  public:
    - name: JAVA_OPTS
      value: >-
        -Dmagnolia.bootstrap.authorInstance=false
        -Dmagnolia.update.auto=true
        -Dmagnolia.home=/opt/magnolia

The above file defines some important parameters for our deployments:

  • replicaCount: number of replicas for the author and public pods; the default is 1

  • image: container image repository and tag

  • service: source and target port of the exposed service

  • ingress: hostnames, routing rules, and TLS termination for the application

  • resources.limits and resources.requests: resource requests and limits for the application

  • liveness and readiness: probes that K8S uses to determine if the application is ready to accept requests or needs to be restarted

  • Custom configurations: other application-specific configurations, for example, JVM options passed via JAVA_OPTS
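
You can override any of these defaults at install time instead of editing the chart. A quick sketch, using the chart path from this article and a hypothetical my-values.yaml:

Shell

# Override a single value on the command line
$ helm install test-mgnl-chart ./helm-chart/ --set replicaCount=2

# Or collect your overrides in your own values file (hypothetical my-values.yaml)
$ helm install test-mgnl-chart ./helm-chart/ -f my-values.yaml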

Templates

The most important ingredient of a chart is the templates/ directory. It holds the application’s configuration files that will be deployed to the cluster. Magnolia’s templates/ directory contains configmap.yaml, ingress.yaml, service.yaml, and statefulset.yaml, as well as the _helpers.tpl file covered below and a tests directory with a connection test for the application.


Workload

A workload is an application running in a Kubernetes cluster. Each workload is made up of a set of pods, where each pod is a set of containers.

In reverse: One or multiple containers make a pod; one or multiple pods make a workload.

Workloads can be exposed as separate services, while easily interacting with each other via cluster-internal DNS. They also have separate data persistence layers.
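
For instance, a service is reachable inside the cluster under a DNS name of the form <service>.<namespace>.svc.cluster.local. Assuming the release name used later in this article and the default namespace, another pod could call the Magnolia public service like this:

Shell

# Cluster-internal call to the public service
# (service name assumed from the chart's fullname-plus-suffix pattern)
$ curl http://test-mgnl-chart-magnolia-public.default.svc.cluster.local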

StatefulSet

StatefulSet and Deployment are controller objects in Kubernetes. While Deployment applies to stateless applications where all instances are interchangeable, StatefulSet instances are not interchangeable. StatefulSet also provides guarantees about the ordering and uniqueness of its pods.

A pod in StatefulSet has its own sticky identity. It is named using an index, for example, pod-0 and pod-1. Each pod can be addressed individually and keeps its name after a restart. It has its own persistent volumes and database layer, too.

StatefulSet suits Magnolia’s intercommunication model as each instance needs its own data persistence layer.

The below excerpt describes the typical structure of a StatefulSet including important parameters such as the replica count, container image and ports, and liveness and readiness probes:

YAML

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "magnolia.fullname" . }}-public
  labels:
    {{- include "magnolia.labels" . | nindent 4 }}-public
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "magnolia.selectorLabels" . | nindent 6 }}-public
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "magnolia.selectorLabels" . | nindent 8 }}-public
    spec:
      containers:
        - name: {{ .Chart.Name }}-public
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            {{- toYaml .Values.env.public | nindent 12 }}
          ports:
            - name: http
              containerPort: {{ .Values.service.targetPort }}
              protocol: TCP
          livenessProbe:
            {{- toYaml .Values.liveness | nindent 12 }}
          readinessProbe:
            {{- toYaml .Values.readiness | nindent 12 }}
          startupProbe:
            {{- toYaml .Values.startupProbe | nindent 12 }}

service.yaml

A Service configures network access to a set of pods from within and from outside the cluster. Unlike ephemeral pods, a service has a stable name and a unique IP address, the clusterIP, which does not change unless the service is deleted and recreated.

Below you find an example of the service representing the Magnolia Public pod. It defines the pod’s selector using a set of pod labels, as well as the port and protocol used between the service and the underlying pods:

YAML

apiVersion: v1
kind: Service
metadata:
  name: {{ include "magnolia.fullname" . }}-public
  labels:
    {{- include "magnolia.labels" . | nindent 4 }}-public
spec:
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.targetPort }}
      protocol: TCP
      name: {{ .Values.service.name }}-public
  selector:
    {{- include "magnolia.selectorLabels" . | nindent 4 }}-public

ingress.yaml

Ingress is an object that allows access to a Kubernetes service from outside the cluster. It defines and consolidates routing rules to manage external users' access to the service, typically via HTTPS/HTTP.

Note: In order to fulfill Ingress objects, you need an Ingress controller in your cluster, for example, NGINX Ingress Controller.
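
If you follow along on minikube, as we do in the deployment section below, you can enable the bundled NGINX ingress controller as an addon:

Shell

$ minikube addons enable ingress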

YAML

{{- $fullName := include "magnolia.fullname" . -}}
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
{{ include "magnolia.labels" . | indent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
{{- if .Values.ingress.tls }}
  tls:
  {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
  {{- end }}
{{- end }}
  rules:
    - host: {{ .Values.ingress.authorHost }}
      http:
        paths:
          - path: /
            backend:
              serviceName: {{ $fullName }}-author
              servicePort: http-author
    - host: {{ .Values.ingress.publicHost }}
      http:
        paths:
          - path: /
            backend:
              serviceName: {{ $fullName }}-public
              servicePort: http-public

The above ingress object includes some important attributes:

  • tls: TLS offloading using certificates stored in Kubernetes secrets; each certificate can be associated with a list of hosts.

  • rules: traffic routing to the backend services; each rule can be defined for a specific hostname, a list of paths, and a backend as a combination of a service and port.

configmap.yaml

ConfigMap allows you to decouple an environment-specific configuration from pods and containers. It stores data as key-value pairs that can be consumed in other places. For example, a config map can be referenced as an environment variable, or used as a pod volume that is mounted to containers.

YAML

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "magnolia.fullname" . }}
  labels:
    app: {{ template "magnolia.name" . }}
    chart: {{ template "magnolia.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  magnolia-cloud.decorations.publishing-core.config.yaml: |-
    receivers: !override
      public0:
        url: https://{{ .Values.ingress.publicHost }}

For example, this ConfigMap is referenced in statefulset.yaml as a pod volume:

YAML

containers:
  - name: {{ .Chart.Name }}-author
    …
    volumeMounts:
      - name: magnolia-home
        mountPath: /opt/magnolia
volumes:
  - name: mounted-config
    configMap:
      name: {{ template "magnolia.fullname" . }}

In the container, the config file magnolia-cloud.decorations.publishing-core.config.yaml is mounted under the /opt/magnolia/ directory.

Named Templates

The Magnolia templates leverage named templates using a syntax like {{- include "magnolia.labels" . }}. A named template is a Go template that is defined in a file and given a name. Once defined in _helpers.tpl, named templates can be used in other templates, avoiding boilerplate and repeated code.
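
As a minimal sketch, a definition in _helpers.tpl looks like the following; the actual labels in the Magnolia chart may differ from these, which loosely follow the helm create scaffold:

YAML

{{/* Common labels, defined once and reused across templates */}}
{{- define "magnolia.labels" -}}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

Other templates then pull these labels in with {{ include "magnolia.labels" . | nindent 4 }}, indenting the rendered block to fit its surroundings.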

Chart Syntax

When developing a Helm chart, I recommend running it through the linter to ensure your templates are well-formed and follow best practices.

Run the helm lint command to see the linter in action:

Shell

$ helm lint ./helm-chart/
==> Linting ./helm-chart/
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, 0 chart(s) failed

To verify that all templates are defined as expected, you can render chart templates locally and display the output using the template command:

Shell

$ helm template ./helm-chart/
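
helm template accepts the same value overrides as helm install and can render a single file at a time, which is handy when debugging one template in isolation:

Shell

# Render only the service template with an overridden replica count
$ helm template ./helm-chart/ --set replicaCount=2 --show-only templates/service.yaml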

Deploy a Helm chart to the cluster

Now, let’s get our hands dirty with a deployment.

Creating a Kubernetes cluster

We will use a minikube cluster for our test deployment. You can refer to https://minikube.sigs.k8s.io/docs/start/ for installation instructions.

Once you have installed minikube on your machine, you can start your cluster with a specific Kubernetes version:

Shell

$ minikube start --kubernetes-version=1.16.0
😄  minikube v1.6.2 on Darwin 10.14.5
✨  Automatically selected the 'virtualbox' driver (alternates: [])
🔥  Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.16.0 on Docker '19.03.5' ...
💾  Downloading kubeadm v1.16.0
💾  Downloading kubelet v1.16.0
🚜  Pulling images ...
🚀  Launching Kubernetes ... 
⌛  Waiting for cluster to come online ...
🏄  Done! kubectl is now configured to use "minikube"

Installing the Magnolia chart

Install the Magnolia chart from your local Git repository and check the release:

Shell

$ helm install test-mgnl-chart ./helm-chart/
NAME: test-mgnl-chart
LAST DEPLOYED: Fri Jan 15 16:56:42 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
$ helm list
NAME                      NAMESPACE    REVISION    UPDATED                                    STATUS      CHART             APP VERSION
test-mgnl-chart           default      1           2021-01-15 16:56:42.981924 +0700 +07       deployed    magnolia-0.1.0    6.2.3      
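
Helm now manages the whole lifecycle of this release. For example, you can upgrade it with changed values and roll back to an earlier revision if something goes wrong (the override shown is illustrative):

Shell

# Upgrade the release with a changed value
$ helm upgrade test-mgnl-chart ./helm-chart/ --set replicaCount=2

# Inspect the revision history and roll back to revision 1
$ helm history test-mgnl-chart
$ helm rollback test-mgnl-chart 1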

Access the application

Use the port-forward command to forward a local port to the service port of the Magnolia author instance, for example:

Shell

$ kubectl port-forward svc/test-mgnl-chart-magnolia-author 8080:80

You can now access the Magnolia application at http://localhost:8080.
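
Similarly, assuming the public service follows the same naming pattern, you can forward a second local port and reach the public instance at http://localhost:8081:

Shell

$ kubectl port-forward svc/test-mgnl-chart-magnolia-public 8081:80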

Next Steps

We’ve explored the basics of the Magnolia Helm chart and deployed the chart in a local cluster. There’s much more you can do from here, for example, making modifications to the chart templates, creating your own values.yaml file, configuring Ingress using an Ingress controller, and deploying the chart to your own cluster.

About the Author

Khiem Do Hoang, Senior Site Reliability Engineer at Magnolia

Khiem works on Magnolia’s Site Reliability Engineering (SRE) team. As an SRE he helps to ensure that Magnolia deploys smoothly and reliably on cloud infrastructure. He is involved in the design and implementation of automation processes, CI/CD, Infrastructure as Code (IaC), monitoring, and logging. Khiem is also interested in designing systems to take advantage of the automation and scalability of cloud-native, microservice-oriented, and containerized environments using technology such as Docker and Kubernetes.
