Blockchain scalability has always been a heated topic. Nearly every blockchain network touts high transactions per second (TPS) as a key selling point and a testament to their scalability. However, TPS is not a valid metric to compare blockchain networks, making it a challenge to evaluate their relative performance. Moreover, big TPS numbers usually come at a cost, which poses the question: Do these networks actually scale, or do they only increase their throughput?
While TPS is a useful measure of transaction speed, relying on it alone to measure scalability misses the full picture. To accurately evaluate blockchain scalability, we must examine why TPS falls short as a metric, the tradeoffs involved in scaling blockchains, and why L2 validity rollups have emerged as the ultimate blockchain scalability solution.
Not all transactions are created equal
First, we need to substantiate the assertion that the simple and convenient metric of TPS is not an accurate measure of blockchain scalability.
To compensate nodes for executing transactions (and to deter users from spamming the network with unnecessary computation), blockchains charge a fee proportional to the computational burden imposed on the blockchain. In Ethereum, the complexity of the computational burden is measured in gas. Because gas is a very convenient measure of transaction complexity, the term will be used throughout this article for non-Ethereum blockchains as well, even though it is typically Ethereum-specific.
Transactions differ significantly in complexity and, therefore, how much gas they consume. Bitcoin, the pioneer of trustless peer-to-peer transactions, only supports the rudimentary Bitcoin script. These simple transfers from address to address use little gas. In contrast, smart contract chains like Ethereum or Solana support a virtual machine and Turing-complete programming languages that allow for much more complex transactions. Hence, dApps like Uniswap require much more gas.
This is why it makes no sense to compare the TPS of different blockchains. What we should compare instead is the capacity for computation – or throughput.
All blockchains have a (variable) block size and block time that determine how many units of computation can be processed per block and how quickly a new block may be added. Together, these two variables determine the throughput of a blockchain.
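To make the distinction concrete, here is a minimal sketch of how the same gas throughput yields very different TPS figures depending on transaction complexity. The block parameters and per-transaction gas costs below are illustrative assumptions, not live network values:

```python
# Sketch: TPS is not comparable across chains, but gas throughput is.
# All parameters below are illustrative assumptions.

def throughput_gas_per_sec(block_gas_limit: int, block_time_sec: float) -> float:
    """Units of computation (gas) the chain can process per second."""
    return block_gas_limit / block_time_sec

def tps(block_gas_limit: int, block_time_sec: float, gas_per_tx: int) -> float:
    """TPS depends on how much gas each transaction consumes."""
    return throughput_gas_per_sec(block_gas_limit, block_time_sec) / gas_per_tx

CHAIN = {"block_gas_limit": 30_000_000, "block_time_sec": 12.0}  # Ethereum-like

simple_transfer = tps(**CHAIN, gas_per_tx=21_000)   # plain value transfer
complex_swap = tps(**CHAIN, gas_per_tx=150_000)     # dApp interaction

print(f"{simple_transfer:.0f} TPS for simple transfers")
print(f"{complex_swap:.0f} TPS for complex swaps")
```

Both figures come from the same underlying throughput (gas per second), which is why throughput, not TPS, is the comparable quantity.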
Impacts of raising throughput the simple way
For blockchains to be maximally decentralized and inclusive, two key properties must be kept in check:
- Hardware requirements: A blockchain’s decentralization depends on the ability of even the weakest node to verify the chain and store its state. Therefore, the costs of running a node (hardware, bandwidth, storage) must be kept low to enable anyone to become a participant.
- State growth: State growth refers to how quickly the blockchain’s state expands as throughput rises. Full nodes must store the entire history and validate the current state. Ethereum’s state is stored and referenced using efficient structures such as Merkle Patricia tries. As the state grows, new leaves and branches are added, making certain operations ever more complex and time-consuming. A larger state worsens the worst-case execution time for nodes, leading to ever-longer block validation, and over time it also increases the total time it takes for a full node to sync.
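As a rough illustration of why a growing state slows nodes, here is a sketch assuming a balanced binary Merkle tree over the accounts, a simplification of Ethereum’s actual Merkle Patricia trie (which has a different branching factor and layout):

```python
import math

# Sketch: in a balanced binary Merkle tree over N leaves (accounts), a
# single lookup or update touches roughly log2(N) tree nodes. This is a
# deliberate simplification of real state tries.

def nodes_touched_per_access(num_accounts: int) -> int:
    """Approximate tree nodes read/written for one state access."""
    return math.ceil(math.log2(num_accounts))

print(nodes_touched_per_access(1_000_000))      # state with 1M accounts
print(nodes_touched_per_access(1_000_000_000))  # state with 1B accounts
```

The per-access cost grows only logarithmically, but every transaction pays it, so a larger state makes every block more expensive to validate.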
Trying to raise throughput by simply increasing the block size and reducing block time would compromise decentralization, as it raises hardware requirements. Costly node hardware results in fewer nodes overall, weakening decentralization and network inclusivity. For example:
- Solana: ~1,900 nodes (1 TB+ SSD, 1 Gbps bandwidth, 256 GB RAM, 12 cores / 24 threads)
Notice that the greater the CPU, bandwidth, and storage requirements a blockchain’s throughput imposes on nodes, the fewer nodes there are on the network – leading to weaker decentralization and a less inclusive network.
Time to sync a full node
When a node runs for the first time, it has to sync with existing nodes, downloading and validating the state of the network all the way from the genesis block to the tip of the chain. This process should be as fast and efficient as possible so that anyone can act as a permissionless participant in the protocol.
Taking Jameson Lopp’s 2020 Bitcoin Node and 2021 Node Sync Tests as an indicator, Table 1 compares the time it takes to sync a full node of Bitcoin vs. Ethereum vs. Solana on an average consumer-grade PC.

Table 1 demonstrates that increasing throughput leads to longer sync times because more and more data needs to be processed and stored.
While improvements to node software are constantly made to mitigate the challenge of the growing blockchain (lowering the disk footprint, faster sync speeds, stronger crash resilience, modularization of certain components, etc.), the nodes evidently still can’t keep pace. Very large block sizes could also destabilize consensus, compromising security.
Defining scalability
Scalability is among the most misrepresented terms in blockchain. While higher throughput is desirable, it’s only part of the puzzle.
Scalability means processing more transactions with the same hardware.
For that reason, scalability can be separated into two categories.
Sequencer scalability
Sequencing describes the act of ordering and processing transactions in a network. As previously established, any blockchain could trivially raise its throughput by expanding the block size and shortening its block time, up to the point at which the negative impact on its decentralization is deemed too significant.
But even tweaking these simple parameters does not provide the required improvements.
Ethereum’s EVM can, in theory, handle up to ~2,000 TPS, which is insufficient to service long-term block space demand. Solana took a different approach than Ethereum to scaling sequencing and made some impressive innovations: a parallelizable execution environment and a clever consensus mechanism allow for far more efficient throughput. But despite these improvements, the approach is neither sufficient nor truly scalable: as Solana’s throughput rises, so do the hardware costs of running nodes and processing transactions.
Solana’s example shows that trying to scale blockchains by raising sequencer scalability isn’t enough. Beyond a point, raising sequencer scalability adversely impacts decentralization and security, two core properties of blockchains.
Verification scalability
Verification scalability describes approaches that raise throughput without burdening nodes with ever-rising hardware costs. This is achieved through cryptographic innovations like validity proofs, the key to validity rollups’ sustainable scalability.
What’s a validity rollup?
Validity Rollups (also known as “ZK-Rollups”) move computation and state storage off-chain but keep a small amount of certain data on-chain. A smart contract on the underlying blockchain maintains the state root of the Rollup. On the Rollup, a batch of highly compressed transactions, together with the current state root, is sent to an off-chain Prover. The Prover computes the transactions, generates a validity proof of the results and the new state root, and sends it to an on-chain Verifier. The Verifier verifies the validity proof, and the smart contract that maintains the state of the Rollup updates it to the new state provided by the Prover.
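The flow above can be sketched end to end as a toy Python model. The “proof” here is just a hash commitment, not a real validity proof, and all names (`prover`, `RollupContract`, and so on) are illustrative:

```python
import hashlib
import json

# Toy model of the validity-rollup flow: batch -> off-chain Prover ->
# proof -> on-chain Verifier -> state-root update. The "proof" is a
# hash commitment standing in for a real cryptographic validity proof.

def h(obj) -> str:
    """Deterministic hash of any JSON-serializable object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def prover(state_root: str, batch: list) -> tuple:
    """Off-chain: 'execute' the batch and produce (new_root, proof)."""
    new_root = h({"prev": state_root, "batch": batch})  # toy state transition
    proof = h(["claim", state_root, new_root, h(batch)])
    return new_root, proof

class RollupContract:
    """On-chain: stores only the state root; checks the proof before updating."""

    def __init__(self, genesis_root: str):
        self.state_root = genesis_root

    def verify_and_update(self, new_root: str, batch: list, proof: str) -> bool:
        expected = h(["claim", self.state_root, new_root, h(batch)])
        if proof != expected:  # invalid proof: reject, state stays unchanged
            return False
        self.state_root = new_root
        return True

contract = RollupContract(genesis_root="0" * 64)
batch = [{"from": "alice", "to": "bob", "amount": 5}]
new_root, proof = prover(contract.state_root, batch)
print(contract.verify_and_update(new_root, batch, proof))
```

In a real system the Verifier checks a succinct proof without recomputing anything about the batch; the hash comparison here merely stands in for that verification step.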
How do validity rollups scale with the same hardware requirements?
Even though L2 provers do require high-end hardware, they do not impact the decentralization of a blockchain because the validity of transactions is guaranteed by mathematically-verifiable proofs. Verifying the validity proof for a batch of transactions is as good as re-executing the transactions and checking the results.
What matters are the requirements to verify the proofs. Because the data involved is highly compressed and largely abstracted away through computation, its impact on nodes of the underlying blockchain is minimal.
Verifiers (Ethereum nodes) do not require high-end hardware, and the size of the batches does not raise hardware requirements. Only state transitions and a small amount of call data (or, ever since Ethereum’s Dencun upgrade went into effect, blob data as an alternative) need to be processed and stored by the nodes. This enables all Ethereum nodes to verify validity rollup batches using their existing hardware.
The more transactions, the cheaper it gets
On traditional blockchains, the more transactions are sent on the network, the higher the fees become for everyone, as block space fills up and users need to outbid each other in a fee market to get their transactions included.
For a Validity Rollup, this dynamic is reversed. Verifying a batch of transactions on Ethereum has a certain cost. As the number of transactions inside a batch grows, the cost to verify the batch grows far more slowly than the batch itself. Adding more transactions to a batch leads to cheaper transaction fees even though the batch verification cost increases, because that cost is amortized among all transactions inside the batch. Validity Rollups therefore want as many transactions as possible inside a batch, so that the verification fee can be shared among all users. As batch size grows, the amortized verification fee per transaction approaches zero: the more transactions on a Validity Rollup, the cheaper it gets for everyone.
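The amortization dynamic can be sketched with assumed numbers: a fixed on-chain verification cost per batch plus a small per-transaction data cost. Both constants below are illustrative, not measured values:

```python
# Sketch: per-transaction cost in a validity rollup, modeled as a fixed
# batch verification cost amortized over the batch, plus a small constant
# per-transaction data cost. Both constants are illustrative assumptions.

FIXED_VERIFY_GAS = 500_000  # assumed L1 gas to verify one proof
DATA_GAS_PER_TX = 300       # assumed L1 gas for one tx's on-chain data

def amortized_gas_per_tx(batch_size: int) -> float:
    """Each transaction's share of the batch's total on-chain cost."""
    return FIXED_VERIFY_GAS / batch_size + DATA_GAS_PER_TX

for batch_size in (10, 1_000, 12_000):
    print(batch_size, round(amortized_gas_per_tx(batch_size)))
```

As the batch grows, the per-transaction cost falls toward the small per-transaction data floor, which is exactly the reversed fee dynamic described above.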
dYdX, a dApp powered by a Validity Rollup, frequently sees batch sizes of over 12,000 transactions. Comparing the gas consumption of the same transactions on Ethereum Mainnet vs. on a Validity Rollup illustrates the scalability gains:
- Settling a dYdX transaction directly on Ethereum mainnet: 200,000 gas
- Settling a dYdX transaction on StarkEx: <500 gas
Another way to look at it: a Validity Rollup’s dominant cost, proof verification, is fixed per batch, so each user’s share of it shrinks as more transactions join the same batch.
Scalability is only worthwhile with security
In theory, optimistic rollups provide nearly the same scalability benefits as validity rollups. But there is one important distinction: Validity rollups are more secure.
Optimistic rollups combat fraud by offering a dispute period, up to a week long, during which network participants can challenge transactions suspected of being fraudulent. Their security therefore depends on the integrity of the humans operating the network: if no one challenges a fraudulent transaction, the targeted user loses their funds.
Validity rollups rely on mathematically verifiable proofs to validate every single transaction they process, making it virtually impossible to process fraudulent transactions. Because of this, validity rollups are more secure than optimistic rollups: they provide scalability with the security to match.
Final piece of the puzzle: permissionless access to the rollup state
To guarantee the validity of transactions on a validity rollup, users only need to run an Ethereum node. But users and developers may want to view and interact with the state and execution of the rollup for various purposes. An indexing L2 node fills this need perfectly. Not only does it allow users to see the transactions in the network, but it is also a critical piece of infrastructure necessary for the ecosystem to function. Indexers like The Graph, Alchemy, and Infura; oracle networks like Chainlink; and block explorers are all fully supported by a permissionless, indexing L2 node.
Conclusion
Many approaches to blockchain scalability incorrectly focus on boosting throughput while neglecting its impact on nodes: ever-rising hardware requirements to process blocks and store history, inhibiting the decentralization of the network.
With validity proofs, a blockchain can achieve true scalability that neither burdens nodes with rising costs nor damages efforts toward decentralization. More transactions with powerful, complex computations on the same hardware are now possible, inverting the fee-market dilemma: the more activity on a validity rollup, the cheaper it gets!