To install MongoDB, we're going to use an open-source Kubernetes operator. The operator and MongoDB will be deployed in the same namespace, so let's start with the namespace.
Give it the name mongodb, and, importantly, add the label monitoring: prometheus. Prometheus will only monitor namespaces that carry this label.
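A minimal namespace manifest could look like this (the file path k8s/mongodb/namespace.yaml is an assumption for this walkthrough):

apiVersion: v1
kind: Namespace
metadata:
  name: mongodb
  labels:
    monitoring: prometheus   # Prometheus only watches namespaces carrying this label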
Next, we need to create a custom resource definition for MongoDBCommunity. A CRD extends the Kubernetes API and lets you define a custom type that is managed by the corresponding operator. The operator will create a MongoDB cluster based on this definition and manage its lifecycle. Create the crd.yaml file.
Since the operator requires Kubernetes API server access, we need to create some RBAC policies. RBAC is Kubernetes' role-based access control system. Create an rbac folder and the corresponding files.
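As a rough sketch of what the rbac folder typically contains (the resource names and the exact permission list here are illustrative, not the operator's official manifests):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: mongodb-kubernetes-operator
  namespace: mongodb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mongodb-kubernetes-operator
  namespace: mongodb
rules:
  # the operator manages pods, services, secrets, configmaps and statefulsets for the database
  - apiGroups: [""]
    resources: ["pods", "services", "secrets", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["apps"]
    resources: ["statefulsets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # plus the custom resources defined by the CRD above
  - apiGroups: ["mongodbcommunity.mongodb.com"]
    resources: ["mongodbcommunity", "mongodbcommunity/status"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mongodb-kubernetes-operator
  namespace: mongodb
subjects:
  - kind: ServiceAccount
    name: mongodb-kubernetes-operator
    namespace: mongodb
roleRef:
  kind: Role
  name: mongodb-kubernetes-operator
  apiGroup: rbac.authorization.k8s.io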
Then, finally, the operator itself. It's going to be deployed as a plain Deployment object. You can adjust a few parameters if you want, such as the operator version, image repository, and others.
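The operator project ships full manifests; the abbreviated sketch below only shows the parts mentioned above (image tag and names are assumptions, adjust to the version you use):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-kubernetes-operator
  namespace: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mongodb-kubernetes-operator
  template:
    metadata:
      labels:
        name: mongodb-kubernetes-operator
    spec:
      serviceAccountName: mongodb-kubernetes-operator
      containers:
        - name: mongodb-kubernetes-operator
          image: quay.io/mongodb/mongodb-kubernetes-operator:0.7.0   # adjust version/repository as needed
          env:
            - name: WATCH_NAMESPACE          # watch the namespace the operator is deployed in
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace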
Now let's move to the terminal and apply all of these files. I assume that you already have Kubernetes provisioned and kubectl configured to talk to the cluster.
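Assuming the manifests live under a k8s/mongodb folder as created above (the exact file names are illustrative), applying them could look like this:

kubectl apply -f k8s/mongodb/namespace.yaml
kubectl apply -f k8s/mongodb/crd.yaml
kubectl apply -f k8s/mongodb/rbac/
kubectl apply -f k8s/mongodb/operator.yaml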
Let's check if the operator is running. Also, you can search logs for any errors.
kubectl get pods -n mongodb
Install MongoDB on Kubernetes (Standalone/Single Replica)
There are a couple of ways to manage users in MongoDB. You can create users using the MongoDB operator or using the shell. I'll show you both ways along the way. Start with creating an admin user with the custom resource.
First, we need to create a secret with a password. In this case, admin123.
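A minimal secret could look like this (the secret name is an assumption; stringData avoids base64-encoding the password by hand):

apiVersion: v1
kind: Secret
metadata:
  name: admin-user-password
  namespace: mongodb
type: Opaque
stringData:
  password: admin123   # referenced later by the MongoDBCommunity resource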
Now the main database configuration file. It's going to be a MongoDBCommunity resource. Specify how many members you want; if you use one member, the operator will create a standalone MongoDB instance. We will start with one and scale it up a little later. For now, only a single type is supported, which is ReplicaSet. The Enterprise Operator supports additional topologies, including sharded clusters; at this point, the community operator can only deploy a replica set.
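A sketch of such a manifest, assuming the resource name my-mongodb and the secret created above (the MongoDB version and roles are illustrative):

apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: my-mongodb
  namespace: mongodb
spec:
  type: ReplicaSet           # the only type supported by the community operator
  members: 1                 # start standalone; scale to 3 later
  version: "4.4.6"           # MongoDB version, adjust as needed
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: admin-user
      db: admin
      passwordSecretRef:
        name: admin-user-password   # the secret created above
      roles:
        - name: clusterAdmin
          db: admin
        - name: userAdminAnyDatabase
          db: admin
      scramCredentialsSecretName: admin-user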
It may take a few seconds to create your first MongoDB instance. Let's make sure that the pod is running and passing all the health checks.
kubectl get pods -n mongodb
You can also find the persistent volume claims; we have one of 38 gigs for data and a second of two gigs used by the operator for logging.
kubectl get pvc -n mongodb
Conveniently, the operator generates a secret with the credentials and connection strings. You get both a standard connection string and a DNS seed list connection string.
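The operator names this secret after the resource, auth database, and user, so with the names assumed above it would be my-mongodb-admin-admin-user; reading the standard connection string could look like this (verify the actual secret name with kubectl get secrets -n mongodb):

kubectl get secret my-mongodb-admin-admin-user -n mongodb \
  -o jsonpath='{.data.connectionString\.standard}' | base64 -d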
Now let's connect to the database. We are going to be using a mongosh shell. You can use the port forward command to be able to access MongoDB locally.
kubectl port-forward my-mongodb-0 27017 -n mongodb
To connect, provide the username and a password and use localhost with the default port number.
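With the port-forward running, a connection attempt could look like this (credentials from the secret above):

mongosh "mongodb://localhost:27017" --username admin-user --password admin123 --authenticationDatabase admin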
To scale up, change members from one to three in the database manifest, then apply it. In a few minutes the operator will spin up two more replicas and join them to the cluster.
kubectl apply -f k8s/mongodb/database/mongodb.yaml
Now we have one primary instance and two replicas.
kubectl get pods -n mongodb
Since we still have the previous mongosh session open, let's verify that the replicas are up to date. You need to switch to the admin database first.
use admin
Then, authenticate with admin credentials.
db.auth('admin-user','admin123')
If you run rs.status(), you will see all the members.
rs.status()
You can also check replication lag to see whether the replicas are able to keep up with the primary.
rs.printSecondaryReplicationInfo()
At this point, we have the MongoDB cluster ready for use. For the following example, we will secure MongoDB with TLS, but first we need to clean up: delete the current deployment and the persistent volume claims along with their corresponding volumes.
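The cleanup could look roughly like this (paths are the ones assumed above; double-check before deleting volumes):

kubectl delete -f k8s/mongodb/database/mongodb.yaml
kubectl delete pvc --all -n mongodb   # removes the data and log claims (and their volumes, if the reclaim policy is Delete)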
To secure MongoDB, we will use cert-manager, which will help us bootstrap and manage a PKI (public key infrastructure) inside the Kubernetes cluster.
There are a few ways to deploy it; one of them is to use a helm chart. Add jetstack helm repo.
helm repo add jetstack https://charts.jetstack.io
Update index.
helm repo update
Before deploying cert-manager, I want to create the Prometheus custom resource definitions, since we will use Prometheus to monitor all our components, including the certificates. You may get an error if you try to use apply, since those files are huge; if that happens, just use create instead of apply. It has to do with the size limit on the annotation that kubectl apply adds to store the last applied configuration.
kubectl create -f k8s/prometheus-operator/crds
Next, create a namespace, and don't forget to include the label monitoring: prometheus. Otherwise, cert-manager will be ignored by Prometheus.
To customize the Helm deployment, you can create a values file and override the default variables. I want to include the CRDs as part of the Helm release, enable Prometheus monitoring, and define the ServiceMonitor object; this is the reason we needed to create the Prometheus Operator CRDs first. The Prometheus instance value, default, must match the label your Prometheus is configured to select as well.
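A values file along those lines might look like this (these keys follow the jetstack/cert-manager chart; verify them against the chart version you install):

installCRDs: true               # deploy cert-manager CRDs as part of the Helm release
prometheus:
  enabled: true
  servicemonitor:
    enabled: true               # requires the Prometheus Operator CRDs created above
    prometheusInstance: default # must match the label your Prometheus instance selects on

The release could then be installed with something like (the values file path is illustrative):

helm install cert-manager jetstack/cert-manager -n cert-manager -f values.yaml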
Then let's bootstrap the CA. Set isCA to true, then the duration for the certificate; for a CA, we usually use five years. The Certificate object only holds a reference to the secret and does not contain any sensitive data.
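Bootstrapping a CA with cert-manager usually needs a self-signed issuer first, then a CA certificate issued from it; a sketch with assumed names:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: root-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: devopsbyexample-root-ca
  secretName: root-ca        # the actual key material lands in this secret, not in the Certificate object
  duration: 43800h           # roughly five years
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
    group: cert-manager.io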
Make sure that the CA certificate is ready, and by default, it will be located in the cert-manager namespace.
kubectl get certificate -n cert-manager
Next, we need to create a new Cluster Issuer based on the CA that we just generated. You don't have to create CA using the cert-manager. If your organization already has a certificate authority, you can simply import it as a secret and create this Cluster Issuer to sign new certificates using your existing CA.
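A sketch of that ClusterIssuer, referencing the CA secret assumed above:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ca-issuer
spec:
  ca:
    secretName: root-ca   # the CA secret generated (or imported) above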
Now, we are ready to issue a certificate for the MongoDB cluster. At first, it will only be accessible within the Kubernetes cluster. isCA is false here. These certificates are automatically renewed by cert-manager, which allows us to use a shorter duration, such as 90 days. The important part: you can either use a common name with a wildcard, which I don't recommend, or define alternative names using the dnsNames section. They must match the internal MongoDB DNS names.
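A sketch of such a certificate, assuming the MongoDBCommunity resource is named my-mongodb in the mongodb namespace (the operator's headless service is then typically my-mongodb-svc; adjust the names to your setup):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mongodb-tls
  namespace: mongodb
spec:
  isCA: false
  duration: 2160h            # 90 days; cert-manager renews it automatically
  secretName: mongodb-tls
  dnsNames:                  # must match the pods' internal DNS names (<pod>.<service>.<namespace>.svc.cluster.local)
    - my-mongodb-0.my-mongodb-svc.mongodb.svc.cluster.local
    - my-mongodb-1.my-mongodb-svc.mongodb.svc.cluster.local
    - my-mongodb-2.my-mongodb-svc.mongodb.svc.cluster.local
  issuerRef:
    name: ca-issuer
    kind: ClusterIssuer
    group: cert-manager.io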
Also, verify that it's ready. It will be placed in the mongodb namespace with the corresponding secret. This secret will contain the CA, the TLS cert, and a private key.
kubectl get certificate -n mongodb
To secure MongoDB with TLS, you need to add a security section and provide a reference to the secret with the certificates. Optionally, if you want to secure an existing database with TLS, you need to set optional to true first. Since this is a new deployment, we don't need that.
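Roughly, the addition to the MongoDBCommunity spec could look like this (field names differ slightly between operator versions; older releases reference the CA via a ConfigMap instead of a secret, so check the version you run):

spec:
  security:
    tls:
      enabled: true
      certificateKeySecretRef:
        name: mongodb-tls      # the cert-manager secret from the previous step
      caCertificateSecretRef:
        name: mongodb-tls      # the same secret also carries ca.crt
      # optional: true         # only needed when adding TLS to an existing, already-running deployment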
We can try to connect to the database using TLS. Since it does not have external access yet, we can only connect to MongoDB from inside the Kubernetes cluster. Exec into a pod that has the mongosh shell; it can be an existing MongoDB instance.
kubectl exec -it my-mongodb-0 -c mongod -- bash
Now use mongosh with TLS. Provide a CA file and a certificate. When you create a headless service in Kubernetes, it automatically creates SRV DNS records. That's why you can use the DNS seed list connection format.
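A connection attempt from inside the pod could look like this (the service name follows the assumptions above; the file paths are placeholders for wherever the CA and the combined cert+key are mounted in your pod):

# paths are placeholders; point them to the mounted TLS files
mongosh "mongodb+srv://my-mongodb-svc.mongodb.svc.cluster.local/admin?tls=true" \
  --tlsCAFile /path/to/ca.crt \
  --tlsCertificateKeyFile /path/to/tls.pem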
Then the certificate. The only difference here is that this certificate needs to be valid for internal access as well as external, so it should have two sets of DNS names. devopsbyexample.io is my public DNS domain that I will use for external MongoDB access. We will use the same CA issuer.
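The updated Certificate's dnsNames section could then carry both sets of names, for example (the external hostnames are hypothetical records under the author's domain):

dnsNames:
  # internal (cluster) names
  - my-mongodb-0.my-mongodb-svc.mongodb.svc.cluster.local
  - my-mongodb-1.my-mongodb-svc.mongodb.svc.cluster.local
  - my-mongodb-2.my-mongodb-svc.mongodb.svc.cluster.local
  # external names
  - mongodb-0.devopsbyexample.io
  - mongodb-1.devopsbyexample.io
  - mongodb-2.devopsbyexample.io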
Also, for MongoDB, we need to add one more section, replicaSetHorizons. It will allow access to the database from inside the Kubernetes cluster as well as from outside of it.
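The section maps each replica set member to its external address; a sketch with the hypothetical hostnames from above (verify the exact format against the operator's external-access sample):

spec:
  replicaSetHorizons:
    - horizon-external: mongodb-0.devopsbyexample.io:27017
    - horizon-external: mongodb-1.devopsbyexample.io:27017
    - horizon-external: mongodb-2.devopsbyexample.io:27017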
To create external access, you could use NodePort, but the better approach is to create a load balancer for each pod. Usually, cloud load balancers charge you based mostly on the number of connections, so this stays affordable. This approach will work in most clouds. This example is deployed on AWS, and the only cloud-specific piece is the annotation that upgrades the load balancer to the newer Network Load Balancer. Under the selector, you target each individual pod of the MongoDB cluster, so we need to create three load balancers, one for each pod.
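One such Service could look like this (shown for the first pod only; repeat for my-mongodb-1 and my-mongodb-2, and note the service names are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-mongodb-0-external
  namespace: mongodb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb   # AWS-specific: provision a Network Load Balancer
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: my-mongodb-0          # target a single pod of the StatefulSet
  ports:
    - name: mongodb
      port: 27017
      targetPort: 27017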
We need to create DNS records pointing at those load balancers. We have three LBs, so go to your DNS hosting and create three CNAME records, one for each load balancer. My domain is hosted with Google Domains.