Java Spring Framework with MongoDB - What not to do!

Frameworks & libraries evaluated:
Spring Data
Hibernate OGM
MongoDB Java driver

First and foremost, you should select an ORM or database driver that has been widely used and tested by the developer community. If you're planning an early production release, never go with a recently released framework or driver; if you do, you might come across some unexpected issues.
Our requirements called for a MongoDB driver that was flexible and would also give us agility during the development phase. We tried Spring Data, which was quite agile, but we ran into some issues with flexibility. Hibernate OGM was still a nascent framework at the time and lacked some features, so we couldn't go ahead with it.
Finally, we decided to use the official MongoDB Java driver. With some additional wrappers we were able to get both flexibility and agility out of the driver.

During the implementation with the Spring Framework we came to a few realisations, which are as follows:

1. Do not implement a custom connection pool for MongoDB

=> It turns out that MongoDB has built-in connection pooling through its MongoClient class. The maximum pool size can be set to any desired value by passing the maxPoolSize parameter to the MongoClient.
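As a minimal sketch (assuming the 3.x Java driver; the host, port and pool size below are illustrative), the pool size can be set through the connection string and the single MongoClient instance reused across the application:

    import com.mongodb.MongoClient;
    import com.mongodb.MongoClientURI;

    public class MongoPoolExample {
        public static void main(String[] args) {
            // The driver maintains its own connection pool; maxPoolSize caps it.
            MongoClientURI uri = new MongoClientURI(
                    "mongodb://localhost:27017/?maxPoolSize=50");
            MongoClient client = new MongoClient(uri);

            // Reuse this one MongoClient everywhere instead of wrapping it
            // in a hand-rolled pool.
            System.out.println(client.listDatabaseNames().first());
            client.close();
        }
    }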

2. Do not implement a cache layer for MongoDB

=> MongoDB has a built-in caching mechanism that differs by storage engine: MMAPv1 uses all available free memory to cache recently used data, whereas WiredTiger has a configurable cache size. Unlike relational databases, the data representation in MongoDB is the same as in the application's memory, so a separate caching layer is not needed. The data is already in a consumable form in the database itself, whereas data from a relational database has to be transformed and processed before an application can consume it.
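For example, the WiredTiger cache is sized in mongod.conf rather than in application code (a sketch only; the path and the 2 GB value are illustrative):

    storage:
      dbPath: /var/lib/mongodb
      wiredTiger:
        engineConfig:
          cacheSizeGB: 2   # cap WiredTiger's internal cache; omit to use the default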

3. Avoid 32-bit versions of MongoDB

=> The 32-bit version has a storage limit of roughly 2 GB, whereas the 64-bit version offers virtually unlimited storage. Also, when switching between the 32-bit and 64-bit versions, you may encounter some nasty issues at runtime.
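If you want to verify which build you are connected to, the server's buildInfo command reports it (a small sketch; the host and port are illustrative):

    import com.mongodb.MongoClient;
    import org.bson.Document;

    public class CheckMongoBuild {
        public static void main(String[] args) {
            MongoClient client = new MongoClient("localhost", 27017);
            // buildInfo reports, among other things, whether mongod is a
            // 32-bit or 64-bit build ("bits": 32 or 64).
            Document info = client.getDatabase("admin")
                    .runCommand(new Document("buildInfo", 1));
            System.out.println("mongod bits: " + info.get("bits"));
            client.close();
        }
    }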

4. Do not span a transactional operation over more than one collection

=> If you're working on a module that needs a transactional operation like a relational database's "begin transaction" and "commit transaction", the operation should touch only a single collection. Since transactions are not supported in MongoDB, we cannot ensure atomicity when more than one collection is involved in the operation. However, MongoDB does provide atomic updates at the document level, so a simple transaction can be achieved if all the collections involved are remodelled into a single collection using embedded documents, as in the sketch below. If embedding documents is not possible, take a look at the two-phase commit workaround for MongoDB (https://docs.mongodb.com/manual/tutorial/perform-two-phase-commits)
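A minimal sketch of that idea, assuming a hypothetical "orders" collection whose line items are embedded in the order document (the database, collection, field names and values are all illustrative):

    import com.mongodb.MongoClient;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;

    import static com.mongodb.client.model.Filters.eq;
    import static com.mongodb.client.model.Updates.combine;
    import static com.mongodb.client.model.Updates.inc;
    import static com.mongodb.client.model.Updates.push;

    public class AtomicOrderUpdate {
        public static void main(String[] args) {
            MongoClient client = new MongoClient("localhost", 27017);
            MongoCollection<Document> orders = client.getDatabase("shop")
                    .getCollection("orders");

            // The new line item and the running total live in the same
            // "orders" document, so this single update is atomic; no
            // cross-collection transaction is needed.
            orders.updateOne(
                    eq("orderId", 1001),
                    combine(
                            push("items", new Document("sku", "ABC-1").append("qty", 2)),
                            inc("total", 59.90)));

            client.close();
        }
    }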

If you have any suggestions, please share them in the comments below.
