MongoDB Deployment on a Kubernetes Cluster Using Vanilla Manifests & a Helm Chart

Irtiza
6 min read · Jun 21, 2019


In this story, I will discuss how to deploy MongoDB on a Kubernetes cluster using vanilla manifests and a Helm chart, along with the issues I faced in each approach.

Assumptions

I am assuming that a Kubernetes cluster is already deployed on AWS. I haven't tried this on other clouds, but I think changing the StorageClass according to your cloud provider (GCE or Azure) will do the job!
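For example, on GCE the StorageClass would presumably look like the sketch below, which swaps in the GCE persistent-disk provisioner; I have only tested the AWS variant shown later in this guide:

# A sketch of a GCE StorageClass; only the AWS variant below has been tested
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: <add storage class name here>
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard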

Prerequisites

It is excellent if you know every nitty-gritty detail of Kubernetes and Helm charts, but you should at least understand the basic concepts used in this guide (StatefulSets, PersistentVolumeClaims, StorageClasses, and Helm releases) to follow the deployment guidelines.

Deployment Guidelines

During my research on how to deploy MongoDB on a Kubernetes cluster, I found two approaches:

1- Vanilla Manifests

2- MongoDB Helm Chart

1- Vanilla Manifests

I found a guide that uses vanilla manifests for MongoDB deployment. I am not going to explain this approach in detail because the linked guide is quite comprehensive, but I will give a brief description of the issues I faced when I used this approach.

Caveats

  • Scaling the StatefulSet up will increase the number of PersistentVolumeClaims (PVCs), but scaling it down will not delete them, because the user is expected to copy the data and delete the PVCs manually.
  • Replica set members were not sharing data with each other because of an invalid configuration. I resolved this issue by re-initializing the replication configuration with the command given below:
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo-0.mongo:27017" },
    { _id: 1, host: "mongo-1.mongo:27017" },
    { _id: 2, host: "mongo-2.mongo:27017" }
  ]
})

In the above configuration, we just need to assign each host an _id; one of the members will act as Primary while the others act as Secondaries.
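The command above needs to be run from a mongo shell on one of the members. A minimal way to open one, assuming the pod names that back the hosts above (mongo-0 and so on; add -c <container-name> if the pod runs a sidecar container):

$ sudo kubectl exec -it mongo-0 -- mongo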

To check whether the configuration has been initialized successfully, use the command given below in the mongo shell of the primary node:

rs.conf()

# Output
{
  "_id" : "rs0",
  "version" : 1,
  "protocolVersion" : NumberLong(1),
  "members" : [
    {
      "_id" : 0,
      "host" : "mongo-0.mongo:27017",
      "arbiterOnly" : false,
      "buildIndexes" : true,
      "hidden" : false,
      "priority" : 1,
      "tags" : {},
      "slaveDelay" : NumberLong(0),
      "votes" : 1
    },
    {
      "_id" : 1,
      "host" : "mongo-1.mongo:27017",
      "arbiterOnly" : false,
      "buildIndexes" : true,
      "hidden" : false,
      "priority" : 1,
      "tags" : {},
      "slaveDelay" : NumberLong(0),
      "votes" : 1
    },
    {
      "_id" : 2,
      "host" : "mongo-2.mongo:27017",
      "arbiterOnly" : false,
      "buildIndexes" : true,
      "hidden" : false,
      "priority" : 1,
      "tags" : {},
      "slaveDelay" : NumberLong(0),
      "votes" : 1
    }
  ],
  "settings" : {
    "chainingAllowed" : true,
    "heartbeatIntervalMillis" : 2000,
    "heartbeatTimeoutSecs" : 10,
    "electionTimeoutMillis" : 10000,
    "catchUpTimeoutMillis" : 60000,
    "getLastErrorModes" : {},
    "getLastErrorDefaults" : {
      "w" : 1,
      "wtimeout" : 0
    },
    "replicaSetId" : ObjectId("5ccfd17a79b0a49ce7ee22fa")
  }
}

Although this fix resolves the issue, it is not a feasible solution because it has to be done manually, which becomes a real headache as the number of nodes in the MongoDB replica set grows. A GitHub issue was created about this many moons ago, but it is still open.

  • A blog post explaining other issues with this method of deployment.

I recommend not using this method.

2- MongoDB Helm Chart

In this method, we will deploy MongoDB using its Helm chart.

  • First of all, we need to create a StorageClass. The StorageClass manifest is given below:
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: <add storage class name here>
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2

Create the StorageClass using the command given below:

$ sudo kubectl apply -f <storage-class-manifest-name>.yaml

To validate that the StorageClass was created, use the command given below:

$ sudo kubectl get storageclasses

# Output
NAME                      PROVISIONER             AGE
<name-of-storage-class>   kubernetes.io/aws-ebs   1d
  • As we are using a MongoDB HelmRelease, the Helm Operator must already be deployed in the cluster; a rough installation sketch is given below.
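Installation details vary by Flux/Helm Operator version. The chart repository (https://charts.fluxcd.io), chart name (helm-operator), and target namespace in the sketch below are assumptions, so check the Flux documentation for the exact procedure for your version:

# Assumed repo/chart names; verify against your Flux version's docs
$ helm repo add fluxcd https://charts.fluxcd.io
$ helm upgrade -i helm-operator fluxcd/helm-operator --namespace flux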
  • Now we deploy the MongoDB StatefulSets using its Helm chart:
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: mongodb-deployment
  namespace: <add namespace name here>
spec:
  releaseName: mongodb-deployment
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com
    name: mongodb
    version: 5.17.0
  values:
    # headless service configs
    service:
      clusterIP: None
      port: 27017

    # mongodb replica set configuration
    replicaSet:
      enabled: true
      name: mongo
      replicas:
        secondary: 1
      pdb:
        minAvailable:
          primary: 1

    # mongodb k8s pod labels
    podLabels:
      role: mongo
      environment: test

    # mongodb persistence configurations
    persistence:
      enabled: true
      mountPath: /bitnami/mongodb
      storageClass: "<add storage class name you created above>"
      accessMode:
        - "ReadWriteOnce"
      size: <size of each replica set>Gi

To deploy the MongoDB HelmRelease, use the command given below:

$ sudo kubectl apply -f <mongodb-helm-release-name>.yaml -n <namespace-name>

Replace <namespace-name> with the namespace in which you want to deploy MongoDB.
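If that namespace does not exist yet, create it first:

$ sudo kubectl create namespace <namespace-name>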

To check whether the StatefulSets were created, use the command given below:

$ sudo kubectl get statefulsets -n <namespace-name>

# Output
NAME                                 DESIRED   CURRENT   AGE
<release-name>-<service>-arbiter     1         1         1m
<release-name>-<service>-primary     1         1         1m
<release-name>-<service>-secondary   1         1         1m

To check whether pods are running:

$ sudo kubectl get pods -n <namespace-name>

# Output
NAME                              READY   STATUS    RESTARTS   AGE
<release>-<service>-arbiter-0     1/1     Running   0          10m
<release>-<service>-primary-0     1/1     Running   0          42m
<release>-<service>-secondary-0   1/1     Running   0          42m

Once we know that the stateful sets and pods are running, we try to access the database.

Copy the name of the pod that has the primary keyword in it and run the command given below:

$ sudo kubectl -n <namespace-name> exec -it <pod-name> -- /bin/bash

The above will open a shell session on the primary node of the MongoDB replica set. Run the command given below to access the mongo CLI:

$ mongo
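The values above do not enable the chart's authentication options, so a plain mongo works. If you do enable authentication, you will need to pass credentials, along the lines of the sketch below; the MONGODB_ROOT_PASSWORD environment variable is an assumption based on the Bitnami image's conventions:

# Only needed if auth is enabled; the env variable name is assumed
$ mongo admin -u root -p $MONGODB_ROOT_PASSWORD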

To check that the MongoDB replica set has been configured correctly, run the command given below in the mongo shell:

rs.conf()

# Output
{
  "_id" : "mongo",
  "version" : 3,
  "protocolVersion" : NumberLong(1),
  "writeConcernMajorityJournalDefault" : true,
  "members" : [
    {
      "_id" : 0,
      "host" : "<host>",
      "arbiterOnly" : false,
      "buildIndexes" : true,
      "hidden" : false,
      "priority" : 5,
      "tags" : {},
      "slaveDelay" : NumberLong(0),
      "votes" : 1
    },
    {
      "_id" : 1,
      "host" : "<host>",
      "arbiterOnly" : false,
      "buildIndexes" : true,
      "hidden" : false,
      "priority" : 1,
      "tags" : {},
      "slaveDelay" : NumberLong(0),
      "votes" : 1
    },
    {
      "_id" : 2,
      "host" : "<host>",
      "arbiterOnly" : true,
      "buildIndexes" : true,
      "hidden" : false,
      "priority" : 0,
      "tags" : {},
      "slaveDelay" : NumberLong(0),
      "votes" : 1
    }
  ],
  "settings" : {
    "chainingAllowed" : true,
    "heartbeatIntervalMillis" : 2000,
    "heartbeatTimeoutSecs" : 10,
    "electionTimeoutMillis" : 10000,
    "catchUpTimeoutMillis" : -1,
    "catchUpTakeoverDelayMillis" : 30000,
    "getLastErrorModes" : {},
    "getLastErrorDefaults" : {
      "w" : 1,
      "wtimeout" : 0
    },
    "replicaSetId" : ObjectId("XXXXX")
  }
}

The output might change based on your configuration, but it must include all the nodes that are part of the replica set.
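Besides rs.conf(), rs.status() reports the live state of each member; a quick way to see which node is PRIMARY, SECONDARY, or ARBITER is to run the snippet below in the mongo shell:

// Print each member's host and current replication state
rs.status().members.forEach(function (m) { print(m.name + " -> " + m.stateStr); })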

Caveats

As we all know, the land of unicorns and lollipops doesn't exist. There are some caveats that I want to share with you:

  • Scaling the StatefulSet up will increase the number of PersistentVolumeClaims (PVCs), but scaling it down will not delete them, because the user is expected to copy the data and delete the PVCs manually (a cleanup sketch follows at the end of this section).
  • I faced an issue where I could not access MongoDB because I was not using the master/primary node of the replica set. This was resolved by running the command given below on the non-master/primary nodes:
rs.slaveOk()

Details of the above command can be found at this link!
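Regarding the first caveat, cleaning up after a scale-down looks something like the sketch below; the exact PVC names depend on the release and chart, so list them first:

# List the PVCs, then manually delete the orphaned ones left by the scale-down
$ sudo kubectl get pvc -n <namespace-name>
$ sudo kubectl delete pvc <orphaned-pvc-name> -n <namespace-name>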

Enjoy mongoing!

Do let me know if you find any issue in this guide. Thank you!
