
Replication and transactional guarantee in MongoDB


One of the projects I am working on uses MongoDB as its database, with the nifty ODM Mongoose doing the heavy lifting of data orchestration.

Transactions had been on my to-do list for a while, but a time crunch kept me from adding them early on, and the project only occasionally demanded them.

With an architectural change, though, and the direction the project was heading, it was high time I implemented transactions in MongoDB.


According to the MongoDB documentation, transactions are used when a situation requires “atomicity of reads and writes to multiple documents (in single or multiple collections)”. MongoDB supports multi-document transactions, and with distributed transactions they can span multiple operations, collections, databases, documents, and shards.


The code to implement this is pretty straightforward:



const { MongoClient } = require('mongodb');

// For a replica set, include the replica set name and a seedlist of the members in the URI string; e.g.
// const uri = 'mongodb://mongodb0.example.com:27017,mongodb1.example.com:27017/?replicaSet=myRepl'
// For a sharded cluster, connect to the mongos instances; e.g.
// const uri = 'mongodb://mongos0.example.com:27017,mongos1.example.com:27017/'

const client = new MongoClient(uri);
await client.connect();

// Prereq: Create collections.
await client
  .db('mydb1')
  .collection('foo')
  .insertOne({ abc: 0 }, { writeConcern: { w: 'majority' } });

await client
  .db('mydb2')
  .collection('bar')
  .insertOne({ xyz: 0 }, { writeConcern: { w: 'majority' } });

// Step 1: Start a client session.
const session = client.startSession();

// Step 2: Optional. Define options to use for the transaction.
const transactionOptions = {
  readPreference: 'primary',
  readConcern: { level: 'local' },
  writeConcern: { w: 'majority' }
};

// Step 3: Use withTransaction to start a transaction, execute the callback,
// and commit (or abort on error).
// Note: the callback for withTransaction MUST be async and/or return a Promise.
try {
  await session.withTransaction(async () => {
    const coll1 = client.db('mydb1').collection('foo');
    const coll2 = client.db('mydb2').collection('bar');

    // Important: you must pass the session to each operation.
    await coll1.insertOne({ abc: 1 }, { session });
    await coll2.insertOne({ xyz: 999 }, { session });
  }, transactionOptions);
} finally {
  await session.endSession();
  await client.close();
}


The only catch is changing the replica set configuration in MongoDB, because that is something you need to do in your MongoDB configuration file.



This post can serve as a reference for anyone who wants to try this out: here is how I set up a replica set to get transactional guarantees in MongoDB.


1. Stop the running MongoDB instance.

2. Find the MongoDB configuration file; on Linux systems this is usually at /etc/mongod.conf.

3. Add the following lines to the replication section of the configuration file:

replication:
  oplogSizeMB: 2000
  replSetName: <replica-set-name>
  enableMajorityReadConcern: false

You can read more about the oplog in the MongoDB documentation.

One more thing to add to the configuration is a key file. If your connection uses a username and password (and it should!), you need to generate a key file and point the configuration at its path.
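As a sketch of that key file step: the key file can be generated with openssl and referenced from the security section of mongod.conf. The path /tmp/mongo-keyfile below is just an example; in practice use a permanent location owned by the mongod user, and copy the same file to every member of the replica set.

```shell
# Remove any stale example keyfile first (the path is just an example).
rm -f /tmp/mongo-keyfile

# Generate a random keyfile for internal replica-set authentication.
openssl rand -base64 756 > /tmp/mongo-keyfile

# mongod refuses to start if the keyfile is readable by other users.
chmod 400 /tmp/mongo-keyfile

# Then point mongod.conf at it:
#   security:
#     keyFile: /tmp/mongo-keyfile
```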

4. Once the configuration is changed, restart the mongod service.

5. Initialize the replica set with the rs.initiate() command inside the mongo shell. Make sure the host values under members are correct.

rs.initiate(
  {
    _id: "myReplSet",
    version: 1,
    members: [
      { _id: 0, host: "mongodb0.example.net:27017" },
      { _id: 1, host: "mongodb1.example.net:27017" },
      { _id: 2, host: "mongodb2.example.net:27017" }
    ]
  }
)

6. In the connection string, make sure to add the same replica set name you used during initialization and in the configuration file.

7. If you are using Compass to view the data, make sure to set the read preference to primary or secondary.
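For step 6, the shape of that connection string can be sketched with a tiny helper. The hostnames and replica set name below are placeholders matching the rs.initiate() example; substitute your own:

```javascript
// Build a MongoDB connection URI for a replica set.
// hosts: array of "host:port" strings; replSetName must match both
// mongod.conf (replication.replSetName) and the rs.initiate() config.
function buildReplicaSetUri(hosts, replSetName, dbName) {
  return `mongodb://${hosts.join(',')}/${dbName}?replicaSet=${replSetName}`;
}

const uri = buildReplicaSetUri(
  [
    'mongodb0.example.net:27017',
    'mongodb1.example.net:27017',
    'mongodb2.example.net:27017'
  ],
  'myReplSet',
  'mydb1'
);

console.log(uri);
// mongodb://mongodb0.example.net:27017,mongodb1.example.net:27017,mongodb2.example.net:27017/mydb1?replicaSet=myReplSet
```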


Once replication and transactions are ready to use, you can create a reusable helper that hands the session to a callback, so you can reuse transactions wherever you need them.


import mongoose, { ClientSession } from 'mongoose';

type TransactionCallback<T = void> = (session: ClientSession) => Promise<T>;

export const runInTransaction = async <T = void>(
  callback: TransactionCallback<T>
): Promise<T> => {
  const session: ClientSession = await mongoose.startSession();

  session.startTransaction();

  try {
    const result = await callback(session);
    // Commit the changes
    await session.commitTransaction();
    return result;
  } catch (error) {
    // Roll back any changes made in the database
    await session.abortTransaction();
    // Log the error
    console.error({ error });
    // Rethrow the error
    throw error;
  } finally {
    // End the session
    session.endSession();
  }
};
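To see the commit/abort flow of such a helper without a live database, here is a small sketch that drives the same control flow with a stubbed session. The stub and the injected-session variant are purely illustrative, not part of Mongoose:

```javascript
// Illustrative stub only: mimics the session surface the helper uses.
function makeStubSession() {
  return {
    committed: false,
    aborted: false,
    startTransaction() {},
    async commitTransaction() { this.committed = true; },
    async abortTransaction() { this.aborted = true; },
    endSession() {},
  };
}

// Same control flow as runInTransaction, with the session injected
// so the behaviour can be exercised without a running MongoDB.
async function runWithSession(session, callback) {
  session.startTransaction();
  try {
    const result = await callback(session);
    await session.commitTransaction();
    return result;
  } catch (error) {
    await session.abortTransaction();
    throw error;
  } finally {
    session.endSession();
  }
}

(async () => {
  // Success path: the callback resolves, so the transaction commits.
  const ok = makeStubSession();
  await runWithSession(ok, async () => { /* writes would go here */ });
  console.log(ok.committed, ok.aborted); // true false

  // Failure path: the callback throws, so the transaction aborts.
  const bad = makeStubSession();
  await runWithSession(bad, async () => { throw new Error('boom'); })
    .catch(() => {});
  console.log(bad.committed, bad.aborted); // false true
})();
```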


I hope this gives a general idea of transactions and of setting up a replica set in MongoDB.


References


https://www.mongodb.com/docs/v4.4/core/transactions/

https://www.mongodb.com/docs/v4.4/replication/

https://www.mongodb.com/docs/v4.4/administration/replica-set-deployment/

