Overview:
We decided to work on a proof-of-concept (POC) involving MQTT-based data synchronisation in our spare time. The ActiveMQ Apollo broker was hosted half-way across the globe on an AWS instance in Virginia (USA), while the MQTT clients ran in Pune, India.
Initial probing showed positive signs, so we went ahead with the POC. Our aim was to implement a cross-platform commons module that could be reused across projects needing near real-time data synchronisation.
Implementation details:
We first defined the message format in JSON with the required header and payload fields. Each data operation was classified under one of four categories:
- GLOBAL_OP: Global audience, operational (temporary) data
- LIMITED_OP: Limited audience, operational (temporary) data
- GLOBAL_PR: Global audience, persistent data
- LIMITED_PR: Limited audience, persistent data
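The exact field names from the POC aren't reproduced in this post; as a sketch, a message under this scheme might look like the following (the `category`, `sender`, `timestamp`, and `payload` fields are illustrative assumptions, not the actual POC format):

```python
import json
import time

# Illustrative message structure: headers identify the operation
# category and sender; the payload carries the data to synchronise.
# Field names are hypothetical, not the actual POC format.
message = {
    "headers": {
        "category": "GLOBAL_PR",        # one of the four operation types
        "sender": "client-42",          # originating client id
        "timestamp": int(time.time()),  # epoch seconds
    },
    "payload": {
        "key": "profile/displayName",
        "value": "Alice",
    },
}

encoded = json.dumps(message)  # serialised form published over MQTT
decoded = json.loads(encoded)  # receivers parse it back the same way
```

Any JSON library on Android or iOS produces the equivalent wire format, which is what made this scheme easy to share across platforms.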
If you're familiar with MQTT routing, you know that messages are routed based on topic hierarchy. For example, publishing on SENSOR/THERMAL/US routes the message to all subscribers of SENSOR/THERMAL/US as well as to any wildcard subscribers such as SENSOR/+/US or SENSOR/#.
The + wildcard matches a single level of the hierarchy, whereas the # wildcard matches the complete sub-tree of the topic.
So, from the example above, if we subscribe to SENSOR/+/US we receive messages published on SENSOR/HUMIDITY/US, SENSOR/THERMAL/US and so on, but not messages published on SENSOR/HUMIDITY/IN.
If we instead subscribe to SENSOR/#, we receive messages published on SENSOR/HUMIDITY/US and SENSOR/THERMAL/US as well as SENSOR/HUMIDITY/IN.
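The matching rules above can be sketched in a few lines. This is a simplified model of what the broker does (it ignores MQTT edge cases such as topics beginning with `$`), but it reproduces the wildcard behaviour described here:

```python
def topic_matches(topic_filter, topic):
    """Check whether an MQTT topic filter matches a concrete topic.

    '+' matches exactly one hierarchy level; '#' (the last level of
    a filter) matches that level and the entire sub-tree below it.
    Simplified sketch: ignores '$'-prefixed system topics.
    """
    filter_levels = topic_filter.split("/")
    topic_levels = topic.split("/")
    for i, level in enumerate(filter_levels):
        if level == "#":
            return True  # matches the remainder of the sub-tree
        if i >= len(topic_levels):
            return False  # filter is deeper than the topic
        if level != "+" and level != topic_levels[i]:
            return False  # literal level mismatch
    # All filter levels matched; topic must not be deeper than filter.
    return len(filter_levels) == len(topic_levels)
```

For instance, `topic_matches("SENSOR/+/US", "SENSOR/HUMIDITY/US")` holds, while the same filter rejects `SENSOR/HUMIDITY/IN`, mirroring the examples above.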
Accordingly, we defined topics and published & subscribed on them to handle the four data-operation types above. With routing handled, we could focus on the crux of the experiment: data syncing.
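The actual topic layout from the POC isn't shown in this post. A hypothetical scheme (names entirely illustrative) that puts each category on its own sub-tree, so clients subscribe only to what they need, might look like:

```python
# Hypothetical topic scheme -- the real POC topic names are not
# reproduced here. Global categories share a fixed sub-tree, while
# limited categories embed a group id so only that audience listens.
def topic_for(category, group_id=None):
    if category == "GLOBAL_OP":
        return "SYNC/OP/GLOBAL"
    if category == "GLOBAL_PR":
        return "SYNC/PR/GLOBAL"
    if category == "LIMITED_OP":
        return "SYNC/OP/" + group_id
    if category == "LIMITED_PR":
        return "SYNC/PR/" + group_id
    raise ValueError("unknown category: " + category)
```

Under such a scheme, a client interested in all persistent data could subscribe to `SYNC/PR/#`, while one interested only in its own group would subscribe to the specific `SYNC/PR/<group>` topic.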
Server-less synchronisation meant implementing and handling P2P communication ourselves. With appropriate message formats and headers in place, we could start sending and receiving the needed synchronisation data in the payload. With a few tweaks and tricks, we got our POC working as initially expected.
The next phase of the prototype involved reducing network costs. This could be achieved in three ways: limit the number of messages sent, reduce the size of each payload (e.g. via compression), or both.
For the prototype, we went with the second approach, i.e. reducing the size of the payload using compression or similar alternatives. This led us to explore Apache Thrift and Google Protobuf as replacements for JSON. Both turned out to be impressive; we won't post benchmarks, as each excelled in its own way.
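Protobuf and Thrift both rely on schema-generated classes, so they aren't shown here. As a stand-in for the "compression or similar alternatives" idea, the sketch below compresses a JSON payload with zlib to illustrate the kind of size reduction involved; the ratio applies only to this synthetic payload, not to our POC data:

```python
import json
import zlib

# A synthetic, repetitive payload: sensor readings whose field names
# repeat in every record -- exactly the overhead that binary formats
# and compression both attack.
readings = [
    {"sensor": "THERMAL", "region": "US", "value": 20.0 + i}
    for i in range(100)
]

raw = json.dumps(readings).encode("utf-8")  # plain JSON bytes
compressed = zlib.compress(raw)             # deflate-compressed bytes

print("JSON: %d bytes, compressed: %d bytes" % (len(raw), len(compressed)))
```

On mobile data plans, shaving bytes per message like this adds up quickly, which is what motivated the format comparison in the first place.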
Frameworks & Libraries used:
- ActiveMQ Apollo (broker)
- Paho (Android)
- MQTTKit (iOS)
- Google Protocol Buffers (Protobuf)
- Apache Thrift