OneLedger Technology Inc.1 was established to provide a universal protocol that enables interoperability across blockchains. We aim to build the first truly interoperable blockchain, with a focus on scalability and fault tolerance.
We use Tendermint2 as our consensus engine. Tendermint provides us with a Byzantine Fault Tolerant state machine that can tolerate the failure of up to one third of the machines in the network. Our approach lets us facilitate both public permissionless and private permissioned blockchains. We recently conducted Load Testing as part of an ongoing effort to measure the performance of our public blockchain. This paper discusses the Load Testing methodology and metrics in detail for our public blockchain.
Discuss the Load Testing methodology
Explain the Load Testing results
NEED FOR LOAD TESTING
OneLedger is built on top of the Tendermint consensus layer, which uses a PBFT-style consensus model. The OneLedger protocol builds a Delegated Proof of Stake (DPoS) model on top of it: Validators stake OLT to gain a chance to become the block proposer. This model is much faster than the PoW3 model, in which participating nodes compete to solve a complex mathematical puzzle to become the block proposer. Tendermint goes through 4 stages4 (Propose, Pre-Vote, Pre-Commit and Commit) before each block is committed, and many messages are exchanged among all the Validators in each of these stages. OneLedger is committed to providing the best possible experience for the community, so it is highly important for us to understand how the network behaves under different load conditions. This helps us be adequately prepared for future milestones such as the Mainnet.
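Tendermint's fault-tolerance bound can be made concrete with a short sketch (illustrative Python, not OneLedger code): a network of n Validators tolerates f Byzantine Validators as long as n ≥ 3f + 1, and each voting stage requires agreement from strictly more than two thirds of the Validators.

```python
# Illustrative sketch of Tendermint's BFT bounds (not OneLedger code).

def max_faulty(n: int) -> int:
    """Largest number of Byzantine Validators an n-Validator network
    tolerates, from the BFT requirement n >= 3f + 1."""
    return (n - 1) // 3

def quorum(n: int) -> int:
    """Smallest vote count that is strictly more than 2/3 of n,
    needed before a Validator can Pre-Commit and then Commit."""
    return (2 * n) // 3 + 1

for n in (4, 16, 64):
    print(f"{n} Validators: tolerate {max_faulty(n)} faulty, quorum {quorum(n)}")
```

For example, a 4-Validator network tolerates 1 faulty node and needs 3 matching votes per stage.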
LOAD TESTING METHODOLOGY
This load test was performed on the Google Cloud Platform. All the VMs used for this load test have 8 CPUs and 16 GB of RAM. We experimented with a varying number of Validators and Mempool5 sizes.
The block size is set to 21 MB and each transaction is roughly 186 bytes, which means a block can hold more than 110,000 transactions. As discussed above, Tendermint goes through 4 stages for each block as part of its commit process. The time interval for each stage is set to the following values:
Figure 1: 4 stages of Tendermint (per block), as per OneLedger’s configuration
Propose – 3 seconds (Each Validator waits for 3 seconds maximum for a new block to be proposed)
Pre-Vote – 1 second (Each Validator waits for 1 second maximum for votes from other Validators)
Pre-Commit – 1 second (Each Validator waits for 1 second maximum for receiving all the Pre-Commits)
Commit – 1 second (Each Validator waits for 1 second after committing a block, which means the minimum block interval is 1 second)
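The block-capacity claim above can be sanity-checked with simple arithmetic (an illustrative sketch; the exact capacity depends on block-header and per-transaction encoding overhead):

```python
# Back-of-the-envelope block capacity check (illustrative; ignores
# block-header and encoding overhead).
BLOCK_SIZE_BYTES = 21 * 1024 * 1024   # 21 MB block size
TX_SIZE_BYTES = 186                   # ~186 bytes per transaction

capacity = BLOCK_SIZE_BYTES // TX_SIZE_BYTES
print(capacity)  # comfortably more than 110,000 transactions per block
```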
We started with 2 Validators and varied the Mempool size across 100, 1000, 5000, 10000, 50000 and 100000 transactions. We repeated the same test with 4, 8, 16, 32 and 64 Validators.
For each test, we bombarded each Validator with 1000-4000 transactions per second, simultaneously across all the Validators.
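The test matrix described above can be sketched as follows (illustrative Python; this is not OneLedger's actual test harness, just an enumeration of the configurations tested):

```python
# Enumerate the load-test matrix described above (illustrative only).
validator_counts = [2, 4, 8, 16, 32, 64]
mempool_sizes = [100, 1000, 5000, 10000, 50000, 100000]

# One run per (Validator count, Mempool size) combination.
runs = [(v, m) for v in validator_counts for m in mempool_sizes]
print(len(runs))  # 36 combinations in total
```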
The number of Validators is plotted on the X-axis and the transactions per second (TPS) on the Y-axis. Each line represents a different Mempool configuration.
We consistently hit a TPS of more than 4000. Notably, this figure is for our main chain alone. With side chains in place, each side chain can sustain its own 4000 TPS. We can therefore add side chains to balance load, allowing throughput to scale with the number of chains.
If we keep the Mempool constant and increase the number of Validators, TPS increases or stays roughly flat up to 16 Validators and decreases beyond that. The reason for this behaviour is straightforward: with more Validators, message exchange takes longer and consensus is slower to reach.
If we keep the number of Validators the same and increase the Mempool, TPS increases until the Mempool reaches 50000 and decreases beyond that. The bigger the Mempool, the more transactions fit in a block, and the more transactions in a block, the longer each Validator takes to verify them. The Validators then start timing out while waiting for votes from the other Validators, and a new block is proposed only after the timeout.
Throughout the load test, CPU and memory usage stayed under 40-50% on each node, which suggests that testing with smaller VMs should achieve similar results.
If you would like to learn more about OneLedger’s Load Testing results, feel free to contact us on our Telegram Dev Channel.
1 OneLedger Technology Inc. is referred to as “OneLedger” for all purposes moving forward in this paper