Data replication is a highly complex topic and one of the long-standing problems in distributed systems (and it will continue to be). Achieving low latency for data replication is hard, and the latency of operations over distributed data fluctuates with the network medium, the replication protocol, the transport, the algorithm, and the network topology.
Beyond the inherent difficulty of building a distributed system, the main problems arise when the system is latency oriented and handles highly critical data.
We realize everything slowly but steadily…
Things were different…
Data replication strategies have been in everyday use for years. These strategies are not latency critical and are mostly used in RDBMS replication systems like Postgres BDR, KV replication via sharded reads with Sentinels, and many more like these…
Since these strategies rely mostly on the reliability/recoverability perspective of the systems (in addition to partition tolerance), current volatile-memory-based systems have taken a different development direction than ordinary *DBMS systems. Surely NoSQL systems are on the rise, and they need to excel at how they operate on data. That's true too.
As with every tradeoff in computer science, once we bring data into the mix we need to consider what our workloads look like. Current workloads are increasingly tailored to custom use cases. Everybody has a different use case, and approaches differ completely from the game industry to the e-commerce sector.
Pillars of the future
I have worked in the security industry for 3+ years. That security industry experience started back when there was no blockchain. You see how other vulnerability researchers do their jobs and how systems code gets hacked and handed back to you in a matter of seconds. I learned a lot while writing CXX code too. The pain was real…
Meanwhile Rust was developed and became the de facto systems programming language for the masses. That is nice. We started with the mantra "Pursuing the trifecta" and we continued… ¹
Day to day I work with data processing systems, mostly wrapping my head around the hard problems of ddata (short for distributed data from now on).
The pillars I mention are the ones that enable me and others to do their jobs and improve their systems. So I started to build those pillars for everyone.
Ionic order: Artillery
I am working on Artillery, a cluster management & distributed data management library. It currently does zeroconf service discovery and AP cluster instantiation. That means it already assembles a cluster whose node status changes can be broadcast as notifications, and it works efficiently at medium-scale, datacenter-local operation. That's beautiful, but what about replication?
I have been working on replication for the last two weeks. There are a couple of methods I am very fond of, and implementing them in Rust is my primary goal, for the reasons of the trifecta.
For the design of a low-latency system I have decided to use the CRAQ replication scheme², an improvement over general CR (chain replication) based systems. CRAQ targets OBS (object-based storage) systems and enables a high-performance, low-latency replication scheme over the existing nodes. The major improvement over plain CR replication is apportioned queries: read workloads scattered all over the cluster, which might even span across zones and across datacenters all over the world. These are the systems Tanenbaum classifies as geographically dispersed systems.
Artillery's ddata module is getting baked nowadays, and CRAQ was the fuel to burn to reach peak performance.
CRAQ works like a bunch of nodes weaved together as a chain (which forms the cluster, and which can also span multiple regions). This is how chain replication works in general. The advantage of this type of replication is that most of the time the workload is designed for data reads/consumption, not for writes/dissemination. That's why that long backstory was written: to envision where I want to go. You can imagine the read operations hitting every node like the Great Wave off Kanagawa hitting the coastline.
In a nutshell this is how CRAQ works:
Every node is aware of the other nodes, and except for the head and the tail, every node in the chain has a bidirectional connection to both its predecessor and its successor. The tail is the one and only node that mostly handles versioning and consistency guarantees.
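To make the version bookkeeping concrete, here is a minimal, illustrative sketch in Rust of a CRAQ-style replica, following the scheme from the paper² (the types and names are my own, not Artillery's API): each key keeps multiple versions, incoming writes are stored dirty, the tail's commit acknowledgement cleans them on the way back up the chain, and a replica only consults the tail when its newest version is still dirty.

```rust
use std::collections::BTreeMap;

// Whether the tail has committed a given version of an object.
#[derive(Clone, Copy, PartialEq)]
enum State { Clean, Dirty }

// A simplified CRAQ replica: key -> versions, newest last.
struct Node {
    store: BTreeMap<String, Vec<(u64, State, String)>>,
}

impl Node {
    fn new() -> Self { Node { store: BTreeMap::new() } }

    // A write arriving from the predecessor: stored as dirty until
    // the tail's commit acknowledgement travels back up the chain.
    fn apply_write(&mut self, key: &str, version: u64, value: &str) {
        self.store.entry(key.to_string()).or_default()
            .push((version, State::Dirty, value.to_string()));
    }

    // The tail committed `version`: mark it clean, drop older versions.
    fn commit(&mut self, key: &str, version: u64) {
        if let Some(versions) = self.store.get_mut(key) {
            versions.retain(|(v, _, _)| *v >= version);
            for entry in versions.iter_mut() {
                if entry.0 == version { entry.1 = State::Clean; }
            }
        }
    }

    // Apportioned read: if the newest local version is clean, answer
    // locally; otherwise ask the tail which version is committed and
    // serve that one. `tail_version` stands in for that round trip.
    fn read(&self, key: &str, tail_version: impl Fn(&str) -> u64) -> Option<String> {
        let versions = self.store.get(key)?;
        let (_, state, val) = versions.last()?;
        if *state == State::Clean {
            Some(val.clone())
        } else {
            let committed = tail_version(key);
            versions.iter().find(|(v, _, _)| *v == committed)
                .map(|(_, _, v)| v.clone())
        }
    }
}
```

The important property is visible in `read`: a clean object never costs a round trip to the tail, which is exactly what lets reads spread across the whole chain instead of piling onto one node.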
CRAQ also has three consistency modes (it defaults to Strong; that is why CRAQ was made): Strong, Eventual, and Eventual with Maximum-Version Bounding. We are not going to look into these approaches in detail, but the main difference between them is how dirty objects are disseminated through the chain.
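The difference between the three modes boils down to one decision: may a replica answer from a dirty version without checking with the tail? A hedged sketch (the enum and function names are hypothetical, not Artillery's API):

```rust
// The three CRAQ read-consistency modes; `max_lag` on the bounded
// variant is the maximum version staleness a reader will accept.
enum Consistency {
    Strong,
    Eventual,
    EventualMaxVersionBounded { max_lag: u64 },
}

// Decide whether a replica may serve its newest (possibly dirty)
// version directly, given the newest local version number and the
// last version it knows the tail has committed.
fn may_serve_dirty(mode: &Consistency, newest: u64, committed: u64) -> bool {
    match mode {
        // Strong: never; a dirty read must go through the tail.
        Consistency::Strong => false,
        // Eventual: always; dirty values may be returned.
        Consistency::Eventual => true,
        // Bounded: only while the staleness stays under the bound.
        Consistency::EventualMaxVersionBounded { max_lag } =>
            newest.saturating_sub(committed) <= *max_lag,
    }
}
```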
In Artillery I have defined the protocol for transport and messaging, and, for the sake of future-proofing, I have enabled versioned strict message transport.
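The idea behind versioned strict transport can be sketched like this (a minimal illustration with made-up message variants, not Artillery's actual wire format): every envelope carries the protocol version, so a peer speaking an incompatible version is rejected before any payload is interpreted.

```rust
// Hypothetical protocol version constant for this sketch.
const PROTOCOL_VERSION: u16 = 1;

// A few illustrative chain-replication message kinds.
#[derive(Debug, PartialEq)]
enum Payload {
    Write { key: String, version: u64, value: Vec<u8> },
    CommitAck { key: String, version: u64 },
    VersionQuery { key: String },
}

// Every message is wrapped in a versioned envelope.
#[derive(Debug, PartialEq)]
struct Envelope {
    protocol: u16,
    payload: Payload,
}

// Strict decoding: refuse anything but the exact supported version.
fn decode(env: Envelope) -> Result<Payload, String> {
    if env.protocol != PROTOCOL_VERSION {
        return Err(format!("unsupported protocol version {}", env.protocol));
    }
    Ok(env.payload)
}
```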
After the hard work of the Rust implementation, adopting a secure, reliable implementation mindset, and heavy testing for performance under the expected message rate (and well beyond it — I am not performance crazy, but when a wonderful approach like this exists I won't hold myself back from putting more load onto it, so I did), I have finalized the draft of the first implementation.
Take this with a grain of salt, like every benchmark, performance, or low-tail-latency post or section you read. Here, we are going to do something different: our benchmarks won't be asynchronous on the user side. We don't include an executor or any threading on the client side, so none of their latency leaks into the loop that reads the data while we benchmark. The whole benchmark is intentionally not concurrent, and all the operations-per-second calculations on the Y axis are pure blocking. That said, any async or multithreaded code working with this algorithm is expected to be ~2.3x faster than the aforementioned approach.
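The blocking measurement loop described above can be sketched in a few lines (names and structure are my own, not the actual benchmark harness): no executor, no threads, just back-to-back synchronous operations timed with the standard library clock.

```rust
use std::time::Instant;

// Pure blocking throughput measurement: `op` stands in for one
// synchronous replicated read; ops/sec is derived from wall-clock
// time over a fixed iteration count. No concurrency anywhere, so no
// executor or scheduler latency is mixed into the result.
fn blocking_ops_per_sec(iters: u32, mut op: impl FnMut()) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        op();
    }
    iters as f64 / start.elapsed().as_secs_f64()
}
```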
Below you can see ops per second on the Y axis and connected clients on the X axis, for both casual CR and apportioned-query-based CR. Hover over the data points to see their exact results.
Still, this is a very early sneak peek of the DData implementation in Artillery. After all, everything is still in flexible mode. You can pair it with a fault-tolerant executor for reliability, a speed-oriented executor, or no executor at all. It isn't tied to anything specific. Pure Rust code. That said, let's take a look at the speedup gained from DC-spanning reads over the assembled nodes vs single-node-read CR:
And finally, the direct correlation between the median timing of an operation (in µs) and the respective operation counts is shown below (lower is better):
Currently, the DData code is under construction. Soon I will release and publish a new version of Artillery with both the core and ddata components. Until then, feel free to raise your questions, share your comments, and support me in the Bastion project's Discord server. Please consider supporting me via GitHub Sponsors if you like my work.
1- If you have been around Rust for a long time, you will know what these are: safe, performant, and concurrent execution.
2- Terrace, J., & Freedman, M. J. (2009). Object storage on CRAQ: High-throughput chain replication for read-mostly workloads. Proceedings of the 2009 USENIX Annual Technical Conference, 143–158.