
Blockspace & Blobspace: a tale of two Ethereum products

1. Introduction

Blockchains are a type of Replicated State Machine operated by a network of nodes. A Sybil resistance mechanism makes network participation permissionless, ensuring censorship resistance. The consensus engine establishes a canonical blockchain state at any moment, providing resistance against double-spend attacks. The execution engine, a.k.a. the State Transition Function, in turn governs how the blockchain progresses by specifying how transactions are executed:

\sigma_{t+1} = \Upsilon(\sigma_t, T)

The inputs of the State Transition Function \Upsilon are the transactions T and the current state \sigma_{t}, while the output is the next state \sigma_{t+1}. Transactions are collated into blocks, which are chained together using a cryptographic hash function. Every time a state transition happens, the blockchain processes one more block. A block has limited space, so the blockchain needs to choose which transactions to include in it. For this reason, blockchains need mechanisms such as fees and transaction validity rules.
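
To make the notation concrete, below is a minimal, illustrative Python sketch of a state transition function applied block by block. The account model and the State and Transaction types are simplified assumptions for the example, not Ethereum's actual data structures.

```python
from dataclasses import dataclass

# Toy state: a mapping from account address to balance.
State = dict[str, int]

@dataclass
class Transaction:
    sender: str
    recipient: str
    amount: int

def apply_transaction(state: State, tx: Transaction) -> State:
    """Toy state transition for a single transaction: (sigma_t, T) -> sigma_{t+1}."""
    if state.get(tx.sender, 0) < tx.amount:
        raise ValueError("invalid transaction: insufficient balance")
    new_state = dict(state)
    new_state[tx.sender] = new_state.get(tx.sender, 0) - tx.amount
    new_state[tx.recipient] = new_state.get(tx.recipient, 0) + tx.amount
    return new_state

def apply_block(state: State, block: list[Transaction]) -> State:
    """Processing a block means applying the STF to every transaction it collates."""
    for tx in block:
        state = apply_transaction(state, tx)
    return state

# Usage: every block advances the replicated state by one transition.
genesis: State = {"alice": 100, "bob": 0}
sigma_1 = apply_block(genesis, [Transaction("alice", "bob", 30)])
print(sigma_1)  # {'alice': 70, 'bob': 30}
```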

There are various reasons to have transaction fees in blockchains. Fees prevent spam: without them, someone could overwhelm the network with millions of worthless transactions, slowing everyone else down. Fees also make resource allocation efficient, because they create a market mechanism that allocates the network's scarce resources to the users who are most willing to pay. Finally, transaction fees, alongside consensus rewards, compensate validators for the effort, electricity, and hardware required to validate the network.

Ethereum transactions range from very simple to very complex, and the required transaction fees vary accordingly; the exact calculation is denominated in gas. Gas is the unit of computational work needed to perform operations on Ethereum. Each operation costs a certain amount of gas depending on how sophisticated it is: complex operations such as interacting with a DeFi protocol cost more gas, while simple operations such as sending ETH cost less. In addition to computation, there is also a gas cost associated with storing data on the blockchain. Following the London upgrade in August 2021, Ethereum uses the EIP-1559 mechanism to determine the base fee per unit of gas for transaction inclusion. On top of it, users can also choose to pay a priority fee per unit of gas to affect transaction ordering. Under the EIP-1559 mechanism, the base fee can increase or decrease by up to 12.5% per block, depending on how much gas the previous block consumed.

Following the Dencun upgrade in March 2024, Ethereum has two kinds of gas: regular block gas and blob gas. Regular block gas is used by transactions interacting with the Ethereum L1 state. Blob gas, on the other hand, is used by transactions that post data to blobs, data storage spaces specifically designed for and used by rollups. The data in blobs is stored and accessible within Ethereum for a few weeks, after which it is deleted. This offers rollups a cheaper way to solve the data availability problem than posting their data in the calldata of regular Ethereum blocks.

In this article, we’ll dive deep into the economics of the main resources that Ethereum provides: we will first discuss Ethereum L1 blockspace supply and demand, and after that, we will cover the L1 blobs market, followed by the L2 blockspace economics.

2. L1 blockspace

Blockchains are designed to be permissionless, for both network validators and users. To achieve and maintain this openness, blockchains must minimize network overload and ensure broad node participation. Overload can occur on various levels, particularly within blockspace, the computational capacity of one block, which imposes technical limits on how many transactions can be processed. In this section, we dive deep into the resulting market structure of blockspace.

We first discuss the supply side of blockspace, both current and future. The current blockspace supply part explains the dynamics under the EIP-1559 mechanism, while the future supply part looks at different proposals for increasing blockspace supply and utilization efficiency. After that, we turn to blockspace demand, discussing different transaction categories and private transactions. Finally, we discuss the blockspace clearing price, showing that the EIP-1559 mechanism produces the supply-demand equilibrium gas price.

2.1 L1 blockspace supply

Blockchains have a scaling limit—validators cannot infinitely increase the block size. As blocks grow larger, two key issues arise:

  1. State Growth: Blockchains function as distributed ledgers, and blocks are the primary factor contributing to ledger expansion. Larger blocks accelerate the growth of the EVM state, requiring nodes to have more powerful and expensive devices to access the blockchain.

  2. Network Overload: To ensure that network validation remains democratically accessible across different geographies and user groups, blockchains must keep hardware requirements low. Larger blocks increase network load, raising both bandwidth and hardware requirements, limiting participation.

The Ethereum L1 blockspace target is determined by the validators. Recently, the gas limit was successfully increased by 20%.

2.1.1 L1 blockspace (gas) supply today

From the Merge in September 2022 until early February 2025, Ethereum's long-term gas supply and block times were constant: new blocks could be submitted every 12 seconds, with a fixed upper limit of 30 million gas per block. On average, however, blocks used 15 million gas. Since March 2024 there has been a community effort to raise the gas upper limit from 30M to 36M; it was widely adopted at the beginning of February 2025, and the average block utilization is now 18M gas.

EIP-1559. To explain how Ethereum achieves this on-average constant 18M gas per block supply, we need to describe the EIP-1559 mechanism that determines the gas price. The EIP-1559 pricing mechanism was introduced in the London upgrade in August 2021 and allows flexibility in blocks' gas utilization (from 0 to the 36M gas maximum) while sustaining a long-term average of 18M gas per block. At a high level, variability in per-block gas utilization is read as a signal of demand changes, and that signal is passed through to the gas price, pushing utilization back toward the 18M target.

In more detail, gas fees consist of a base fee and a priority fee. The base fee is the minimum fee a transaction needs to pay to be included in a block; it is the network's spam-prevention mechanism, and it is burned, reducing the supply of ETH. On top of the base fee, a user can choose to pay a priority fee, a tip paid to the block builder to get the transaction included in a block more quickly.

EIP-1559 increases or decreases the base gas price by up to 12.5% depending on how far the previous block's gas usage was from the 18M target, according to the following formula (a short code sketch of the update rule follows the examples below):

\text{base-fee}_{current} = \text{base-fee}_{prev} \times \left(1 + \frac{1}{8} \cdot \frac{size_{prev} - size_{target}}{size_{target}} \right)

where size is the size of a block, and the target size is 18M gas. Some examples:

  • If the previous block was completely empty, then the next block's base gas price will decrease by \frac{1}{8} \cdot (0 - 18\cdot 10^6)/(18\cdot 10^6) = -12.5\%.

  • If the previous block was completely full (consumed 36M gas), then the gas price will increase by \frac{1}{8} \cdot (36\cdot 10^6 - 18\cdot 10^6)/(18\cdot 10^6) = 12.5\%.

  • (general case) If the previous block consumed Y gas, where 0 \le Y \le 36 \cdot 10^6, then the gas price will change by X\% = \frac{1}{8} \cdot (Y - 18\cdot 10^6)/(18\cdot 10^6), where -12.5\% \le X\% \le 12.5\%.
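
Below is a minimal Python sketch of this base fee update rule, reproducing the examples above. The constants mirror today's 36M gas limit and 18M gas target; it is an illustration of the formula, not a client implementation (clients use integer arithmetic).

```python
GAS_LIMIT = 36_000_000       # maximum gas per block
GAS_TARGET = GAS_LIMIT // 2  # 18M gas target
MAX_CHANGE = 1 / 8           # +/- 12.5% per block

def next_base_fee(prev_base_fee: float, prev_gas_used: int) -> float:
    """EIP-1559-style update: move the base fee toward the utilization target."""
    assert 0 <= prev_gas_used <= GAS_LIMIT
    delta = (prev_gas_used - GAS_TARGET) / GAS_TARGET  # in [-1, 1]
    return prev_base_fee * (1 + MAX_CHANGE * delta)

# Examples from the text (base fee in gwei):
print(next_base_fee(10.0, 0))           # empty block  -> 8.75  (-12.5%)
print(next_base_fee(10.0, 36_000_000))  # full block   -> 11.25 (+12.5%)
print(next_base_fee(10.0, 18_000_000))  # at target    -> 10.0  (unchanged)
```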

Side quest. An interesting mathematical question came up in the process of our study: if block builders wanted to maximize the amount of gas the blockchain processes without increasing the base fee on average, what strategy should they use when posting blocks? Put another way, what is the optimal way to pack gas into blocks? A more precise problem formulation, along with some beautiful solutions in the comments, can be found here:

2.1.2 L1 blockspace (gas) supply in the future

There are two ways to increase the throughput of Ethereum: either i) using gas more efficiently by compressing data, or ii) increasing the amount of gas processed per unit of time. Data compression lets a transaction with the same contents take up less data. Increasing gas throughput can be achieved either by raising the block gas limit or by reducing the block time. Raising the block gas limit is not an easy task, as it can lead to the problems discussed below, such as too rapid growth of the chain's size.

Data compression: EIP-7702, included in the Pectra upgrade scheduled for Q2'25, will improve transaction compression using account abstraction. For example, the size of an ERC-20 transaction will decrease from the current 154 bytes to 128 bytes. This is achieved mainly by aggregating transaction signatures, reducing the overhead of multiple individual signatures. However, it might take time for the upgrade to be widely adopted and for the network to see its benefits, as ERC-4337 account abstraction has not been widely adopted so far.

Data compression efficiency of ERC-20 transactions under various proposals

Fusaka, the upgrade coming after Pectra and expected no earlier than 2026, might include EIP-7692. It introduces the EVM Object Format (EOF), an improvement to the EVM that, among other things, results in roughly a 9% decrease in the calldata gas used by EOF smart contracts. Besides decreasing the gas used by smart contracts, the upgrade will also improve contract deployment, introduce versioning support, enhance code validation, and introduce modularity.

The stateless and stateful compression schemes mentioned in the picture are still years away. Ideal stateless compression would decrease the size of an ERC-20 transaction to 75 bytes, achieved mainly by removing redundant gas data and empty data fields.

Block size increases: In addition to data compression, the supply of L1 gas can also be increased by raising the block size. As mentioned above, one of the bottlenecks in Ethereum's throughput is the size of the blockchain state and its growth. If the blockchain's data grows too fast, the storage and memory requirements for running a node may become prohibitive on consumer hardware. In the long term, there is a vision to reduce the data nodes need to store locally, which would allow throughput to increase because the state growth rate is no longer as big of a concern. Before describing the various parts of this vision, we cover some terminology in more detail.

History in the context of Ethereum refers to the complete record of all transactions and events that have ever taken place on the blockchain. A state, on the other hand, refers to a snapshot of all accounts, balances, smart contract storage, and other critical data at a given time. Improvement proposals in Ethereum’s data management discuss changing the roles of different actors in data storage and how to implement such changes while maintaining decentralization. The data management improvement proposals typically discuss how to decrease the storage requirements for block builders, responsible for creating new blocks, and validator nodes, responsible for confirming new blocks, by outsourcing part of the history and state storage to other actors.

There are also different kinds of nodes in Ethereum. Full nodes store the current blockchain state and the full history (blocks and headers). They help secure the blockchain by keeping track of the latest blocks, propagating data, and updating the state accordingly, and they can validate every transaction and smart contract interaction. If a full node stakes 32 ETH and participates in Proof-of-Stake, it can also propose and validate new blocks. An archive node stores the entire blockchain, including all past state changes; running one requires over 10 TB of storage, and archive nodes are mainly used by entities like block explorers and data service providers. In addition, there are light clients, which require minimal data storage as they only store block headers. Light clients rely on full and archive nodes for transaction and state verification but are useful for low-power applications such as mobile wallets.

There are some proposals to make those nodes more efficient, trustless, and cost-effective:

  • History expiry, discussed under EIP-4444, would allow validator nodes to optionally discard historical block data older than a year while ensuring the data remains accessible by incentivizing archive nodes to keep it. The idea behind this upgrade is to reduce the storage space required by validator nodes, as they would no longer store old data they are unlikely to use.

  • State expiry allows state data that is not frequently used to become inactive. Inactive data can be ignored by full nodes until it is resurrected and becomes active again. The state could expire in multiple ways, such as by rent or by time. In the expiry-by-rent model, accounts are charged “rent”, and if the balance used to pay rent reaches zero, the account expires. In the expiry-by-time model, accounts become inactive if they are not interacted with for a certain amount of time. It is important to note that in neither of these models is the inactive state deleted; it is just stored separately from the active state by a history provider. When someone wants to access an expired state, they ask a history provider for a proof of the historical state and submit the proof along with their transaction to the network, making the state active again. (A toy sketch of both expiry models follows this list.)
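
As an illustration only, here is a toy Python sketch of the two expiry rules described above. The thresholds, the Account structure, and the rent rate are invented for the example and do not correspond to any concrete Ethereum proposal.

```python
from dataclasses import dataclass

RENT_PER_BLOCK = 1          # toy rent charged per block (invented)
EXPIRY_PERIOD_BLOCKS = 100  # toy inactivity window (invented)

@dataclass
class Account:
    rent_balance: int = 0
    last_touched_block: int = 0
    active: bool = True

def charge_rent(account: Account, blocks_elapsed: int) -> None:
    """Expiry by rent: the account expires once its rent balance runs out."""
    account.rent_balance -= RENT_PER_BLOCK * blocks_elapsed
    if account.rent_balance <= 0:
        account.active = False  # moved to the inactive state, not deleted

def check_time_expiry(account: Account, current_block: int) -> None:
    """Expiry by time: the account expires after a period of inactivity."""
    if current_block - account.last_touched_block > EXPIRY_PERIOD_BLOCKS:
        account.active = False

def resurrect(account: Account, proof_is_valid: bool, current_block: int) -> None:
    """An expired account is reactivated by submitting a proof of its historical state."""
    if not account.active and proof_is_valid:
        account.active = True
        account.last_touched_block = current_block
```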

Statelessness is a change to how Ethereum nodes handle state data. In weak statelessness, only block producers need access to the full state, and other nodes can verify blocks without local state data. For this to be possible, Verkle trees must be implemented in Ethereum clients; as of now, they might land earliest in Fusaka, which is expected no earlier than 2026. In strong statelessness, no nodes would need access to the full state. Instead, a separate network of specialized “portal nodes” stores portions of the state that others can query. Strong statelessness has been studied by Ethereum researchers, but as of now the consensus is that weak statelessness is sufficient for Ethereum's scaling. Both weak and strong statelessness would significantly decrease the hardware requirements for running a validator.

In addition to state growth, another bottleneck in Ethereum's growth is the amount of data each block contains. Currently, an Ethereum block can have a maximum size of 7.15 MB if it only consists of data-heavy transactions, excluding blobs, which can add 0.75 MB of data on top of that. The average size of a block, however, is around 100 KB. The current maximum size of a block and blobs becomes problematic when potentially increasing the block size or the number of blobs, as certain Ethereum clients can only handle up to 10 MB of data per block. EIP-7623, included in Pectra happening in Q2'25, addresses this problem. It increases the cost of calldata to limit the number of data-heavy transactions that fit in a block. This would decrease the maximum size of a block to 0.72 MB, excluding blobs, while having little effect on the majority of transactions.

In addition to these long-discussed upgrades, EIP-7782 has recently surfaced, proposing to decrease the block time from 12 seconds to 8 seconds. With the per-block gas limit unchanged, this would effectively increase Ethereum's throughput by 50% (see the calculation below). Beyond throughput, decreasing the block time also has other positive effects, such as reducing the amount of MEV and improving the user experience.
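
For reference, holding the per-block gas limit fixed, the throughput factor from a block-time reduction is just the ratio of the old and new block times:

\frac{\text{throughput}_{new}}{\text{throughput}_{old}} = \frac{\text{gas limit}/8\text{ s}}{\text{gas limit}/12\text{ s}} = \frac{12}{8} = 1.5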

Having now discussed current and future blockspace supply, let’s look at the L1 blockspace demand.

2.2 L1 blockspace demand

There are different kinds of applications on top of Ethereum that users can interact with. While certain interactions, such as sending USDC to a friend, might not be time-sensitive, others, such as arbitraging prices between two decentralized exchanges, are. Typically, the number of time-sensitive transactions increases with price volatility, as volatility creates more arbitrage opportunities. This can result in a momentary increase in demand, as more transactions in total compete for inclusion in a block. Beyond price volatility, other events, such as the minting of a hot new NFT collection, can also drive momentary demand.

Below is a visualization of the types of transactions Ethereum processes. Out of all transactions on the blockchain, we categorize those interacting with stablecoins, lending protocols, ERC-20 tokens, NFT trading protocols, and decentralized exchanges. The category with the most activity is ERC-20 transfers, which is logical, since stablecoins are a subgroup of ERC-20 tokens and both lending and decentralized exchange protocols interact with ERC-20 tokens. The second most active category is stablecoin transfers. Among the categories, lending protocols and NFT trading see limited activity.

Percentage of all transactions for different transaction categories over time on Ethereum

L1 transactions primarily originate from DEX trades and token transfers, including stablecoin and ERC-20 transfers. However, not all transactions reach block builders the same way. In Ethereum, there are different types of order flow. Let's explore this in more detail.

2.2.1 Private and public order flow

With Proposer-Builder Separation (PBS), Ethereum's blocks are built by Block Builders, while validators verify and propagate them across the network. This mechanism introduces two ways to submit transactions in Ethereum:

  1. Private Orderflow: Transactions can be sent directly to Block Builders, bypassing the public mempool.

  2. Public Mempool: Transactions are broadcast openly, allowing anyone to view and include them in a block.

Transactions sent privately to block builders are typically MEV bundles, high-value transactions that would be exploited by MEV bots if sent publicly, or transactions covered by a partnership in which the sender routes its transactions exclusively to one builder or a few builders. MEV bundles are bundles of transactions that yield a profit for the bundle sender if executed, such as arbitrage trades, sandwiching a transaction by frontrunning and backrunning it, or liquidating a position on a lending protocol. High-value transactions, such as a big swap on a decentralized exchange that could be sandwiched, are sent privately to avoid being exploited. Below is a graph showing the percentage of private orders by month.

Percentage of private transactions

In addition to bundles extracting MEV and transactions trying to avoid becoming victims of MEV extraction, private transactions can also result from an exclusive order flow agreement between a builder and a transaction sender. For example, the Bananagun bot and the Titan builder used to have an agreement under which Bananagun would only send its transactions to Titan. This creates an information asymmetry between builders with exclusive order flow and other builders, giving the former a competitive advantage.

Now let’s put demand and supply together and look at blockspace price equilibrium.

2.3 L1 blockspace price equilibrium

Observing Ethereum's blockspace supply and demand together, the supply curve is essentially a constant quantity: it increases quite rarely, and today it sits at a target of 18M gas per block, around which actual utilization oscillates. The demand curve naturally slopes downwards, with quantity decreasing as the price increases.

As we described above, the gas price rises or falls based on whether blocks use more or less than half of the gas limit, which in turn depends entirely on demand. It follows that the Ethereum base fee, averaged over a long enough time horizon to account for spikes, represents the economic equilibrium where demand and supply intersect. To illustrate this, suppose a popular ERC-20 token has just launched and demand for blockspace increased. The demand curve shifts to the right, and at the current price level supply is smaller than demand. As long as demand exceeds supply, block utilization will stay above the 18M target, and the base fee will increase until it reaches the new equilibrium price. A similar dynamic in the opposite direction holds if demand decreases.
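
The sketch below is a toy simulation of that convergence, re-implementing the base fee update from the earlier sketch. The linear demand curve and its parameters are invented purely to illustrate how a demand shift pushes the base fee to a new equilibrium.

```python
GAS_LIMIT = 36_000_000
GAS_TARGET = GAS_LIMIT // 2

def next_base_fee(prev_base_fee: float, prev_gas_used: int) -> float:
    delta = (prev_gas_used - GAS_TARGET) / GAS_TARGET
    return prev_base_fee * (1 + delta / 8)

def gas_demanded(base_fee_gwei: float, shift: float) -> int:
    """Toy linear demand curve: higher fees -> less gas demanded (parameters invented)."""
    demand = int(40_000_000 + shift - 2_000_000 * base_fee_gwei)
    return max(0, min(GAS_LIMIT, demand))  # capped by the block gas limit

base_fee = 11.0  # starting at equilibrium for the unshifted curve (gwei)
for block in range(30):
    shift = 10_000_000 if block >= 5 else 0  # demand shock, e.g. a popular token launch
    used = gas_demanded(base_fee, shift)
    base_fee = next_base_fee(base_fee, used)

# After the shock, the base fee climbs until gas_demanded(fee) returns to the 18M target,
# i.e. utilization goes back to ~18M gas per block at a higher equilibrium price.
print(round(base_fee, 2))
```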

The long-term supply can only increase via protocol upgrades, as with the recent pumpthegas movement pushing the gas supply from an average of 15M per block to 18M per block. This results in a higher long-term equilibrium gas quantity than before the upgrade; it is also possible for demand to then naturally expand, as developers and users become more confident that fees will not spike too much.

At times, there are also events during which the supplied quantity decreases. This can be caused, for example, by a bug in validator client software leading to an increase in missed slots [1, 2], or by a block proposer experimenting with a new MEV-maximizing strategy such as timing games. This naturally results in a lower equilibrium gas quantity and a temporarily higher gas price.

3. L1 blobspace

With Ethereum’s rollup-centric roadmap, the network has shifted to a new scaling approach where its primary roles are providing Data Availability (DA) and Settlement for Layer 2 solutions. However, the increasing DA demands from rollups have driven higher demand for L1 resources, creating scalability limitations for L2s.

The Dencun upgrade in March 2024 included EIP-4844, Proto-Danksharding, which introduced data “blobs”. Blobs are temporary data storage that gets deleted from Ethereum after about two weeks, designed to give rollups cheap and scalable access to Ethereum DA. This section first discusses the current and future supply of blobspace, then analyzes the demand for blobspace, and finally looks at the price equilibrium. The current blobspace supply part explains the supply dynamics under the current implementation, while the future supply part looks at proposals for increasing blobspace supply and its efficiency. The blob demand part analyzes the demand from different L2s, the trade-off between security and cost, blob sizes, and data utilization optimization. The blobspace price equilibrium part discusses how the pricing of blobspace differs from the pricing of L1 blockspace.

3.1 L1 blobspace supply

Blobs share similar limitations with Ethereum L1 blockspace. Their data size, count, and other parameters are determined by the Ethereum protocol. While blobs have significantly improved scalability for Ethereum L2s, there is still substantial room for future upgrades to enhance their scalability further.

In this section, we will first explore the current blobspace supply and potential future upgrades. Then, we will analyze the demand dynamics for blobs in more detail.

3.1.1 L1 Blobspace supply today

Each slot can have up to 6 blobs, with a target of 3 blobs. Blob pricing follows an EIP-1559-style mechanism similar to block gas. However, since the gas target for a block is 18 million gas while the blob target is 3, the supply of blobs per slot is far more discrete than the amount of gas in a block. Each blob can hold up to 128 KB of data, and the price of a blob is the same regardless of how much data it actually contains. As of today, each blob belongs to a single L2; two L2s cannot share a blob. In addition to blobs, L2s can also post their data in blocks, but blockspace tends to be significantly more expensive, so it typically only makes sense to post L2 data in blobs.
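
For readers who want the mechanics, here is a Python sketch of the EIP-4844 blob fee logic. The structure follows the excess-blob-gas accounting described in the EIP, with the constants as specified at the time of Dencun, but it is a simplified illustration rather than a client implementation.

```python
GAS_PER_BLOB = 2**17                         # 131,072 blob gas per blob
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB
MIN_BLOB_BASE_FEE = 1                        # wei
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477    # tuned so the fee moves by at most ~12.5% per block

def calc_excess_blob_gas(parent_excess: int, parent_blob_gas_used: int) -> int:
    """Excess blob gas accumulates whenever blocks use more than the 3-blob target."""
    if parent_excess + parent_blob_gas_used < TARGET_BLOB_GAS_PER_BLOCK:
        return 0
    return parent_excess + parent_blob_gas_used - TARGET_BLOB_GAS_PER_BLOCK

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator), as in EIP-4844."""
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas, BLOB_BASE_FEE_UPDATE_FRACTION)

# Sustained 6-blob blocks add 3 blobs' worth of excess each slot, pushing the fee up exponentially.
excess = 0
for _ in range(100):
    excess = calc_excess_blob_gas(excess, 6 * GAS_PER_BLOB)
print(blob_base_fee(excess))
```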

Now let’s look at different potential ways to increase supply in the future.

3.1.2 L1 Blobspace supply in the future

Similar to the L1 gas supply, there are two ways to increase the throughput of the data L2s post on L1: either i) posting data to L1 more efficiently, or ii) increasing the amount of data that can be posted on L1.

Efficiency improvement

If the entire 128 KB of blob space is not used, the blob sender still pays for the full 128 KB. This results in inefficient usage of blob space, as some L2s may at times post blobs that are well under 128 KB in size. There have been discussions about blob aggregators that would allow multiple L2s to share a blob, resulting in more efficient use of blob space. Such a solution would be most useful for L2s with low traffic, as they would no longer need to either wait a long time to post a big batch of transactions, post frequent small batches, or post their data in blocks.
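
To illustrate the aggregation idea, the following toy Python sketch greedily packs payloads from several L2s into shared 128 KB blobs. It ignores real-world concerns such as proofs, ordering, and trust between rollups, and all names and sizes are invented.

```python
BLOB_SIZE = 131_072  # usable blob size, in bytes

def pack_into_blobs(payloads: dict[str, int]) -> list[list[tuple[str, int]]]:
    """Greedy first-fit-decreasing packing of (rollup, payload_size) pairs into shared blobs."""
    blobs: list[list[tuple[str, int]]] = []
    free_space: list[int] = []
    for rollup, size in sorted(payloads.items(), key=lambda kv: -kv[1]):
        assert size <= BLOB_SIZE, "a payload larger than a blob must be split first"
        for i, free in enumerate(free_space):
            if size <= free:
                blobs[i].append((rollup, size))
                free_space[i] -= size
                break
        else:
            blobs.append([(rollup, size)])
            free_space.append(BLOB_SIZE - size)
    return blobs

# Invented example: four small rollups share two blobs instead of paying for four.
payloads = {"rollup_a": 90_000, "rollup_b": 40_000, "rollup_c": 70_000, "rollup_d": 55_000}
shared = pack_into_blobs(payloads)
print(len(shared), "blobs instead of", len(payloads))
```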

In addition to utilizing blob space efficiently, ZK L2s can also potentially use proof aggregation protocols: protocols that generate a single ZK proof attesting to the validity of multiple other proofs, effectively aggregating them into one. This moves the verification cost of the ZK proofs into proving costs. Proof aggregation makes sense if generating and verifying the aggregate proof costs less than verifying the individual proofs separately.

Amount increase

Pectra, expected in Q2'25, will include EIP-7691, which increases the target number of blobs per block from 3 to 6 and the maximum from 6 to 9. This will double the effective throughput of blobs.

Fusaka, expected no earlier than 2026, might include 1D PeerDAS (Data Availability Sampling). With PeerDAS, nodes are no longer required to download all blob data for verification. Instead, they verify only randomly selected portions of the blob data and participate in subnets, where a peer set of nodes together covers all the blob data. This improves scalability for L2s while still ensuring the availability of blob data. Based on some estimates, introducing 1D PeerDAS could allow increasing the number of blobs per slot from 6 to up to 128.

In addition to 1D PeerDAS, 2D PeerDAS, also known as Danksharding, could be introduced in the more distant future. Danksharding uses more sophisticated sampling, sampling the data in two dimensions instead of one, which is an even more lightweight solution for individual nodes than 1D PeerDAS. Based on some estimates, 2D PeerDAS could achieve up to 256 blobs per slot.

Next, we will take a deep dive into blobspace demand.

3.2 L1 blobspace demand

Layer 2s post data on Ethereum to secure their blockchains with Ethereum. They collect their transactions into a batch and post compressed data about the batch on Ethereum. There are multiple questions L2s need to consider when it comes to blobs: for example, whether to post partially empty blobs in order to secure the L2 with Ethereum more frequently, and whether to post data in blobs or in blocks. There are also additional design questions for rollups, such as whether the rollup should be optimistic or ZK.

L2 transactions are included in L2 batches that are batched together and included in the L1 blob

The two main types of L2s are Zero-Knowledge (ZK) and optimistic rollups. ZK L2s collect transactions into a batch and generate a ZK proof proving that all state transitions are valid; afterwards, the ZK L2 posts either state differences or transaction data, along with the ZK proof, on Ethereum. Optimistic L2s, on the other hand, collect transactions into a batch and post the batch's transaction data and state roots on Ethereum. The key difference is that while a ZK proof posted on Ethereum immediately proves the validity of a ZK L2 batch, an optimistic L2 relies on a fraud-proof window during which anyone can challenge the data it posted on Ethereum. Because optimistic L2s rely on the fraud-proof window, they need to post transaction data, while ZK L2s only need to post state diffs, which consist of less data than the full transaction data. However, as of today only zkSync, Starknet, and Paradex post state diffs, while other ZK L2s like Linea and Scroll choose to post transaction data, resulting in higher data consumption than optimistic L2s once combined with the ZK proof.

Blobs are temporary data storage and get deleted from Ethereum after about two weeks. L2s can choose between posting data in blobs or in calldata. The trust assumptions for block and blob data are the same; however, because blob data is only temporary while block data stays on the ledger forever, it is typically cheaper to post data in blobs instead of blocks.

As visualized below, the total amount of data posted in blobs has trended upward since the launch of blobs. However, the trend stalled around November 2024. This is because blob usage reached the target of an average of 3 blobs per slot, meaning that posting more blobs for an extended period would force submitters to pay significantly more ETH for blobspace. At the same time, the average size of blobs first decreased between the launch of blobs and Q3 2024, after which it increased: as the average number of blobs reached the target of 3 per slot, it became more expensive to post blobs that are not nearly full.

Average and total blob data consumption since the launch of blobs

The number of blobs consumed per L2 over time shows that the increase in demand is mainly driven by Base. In addition, World Chain has had an increasing trend in the consumption of blobs since October.

Distribution of L2 blob demand across L2s

Base, Taiko, Arbitrum, World Chain, and Optimism are the biggest rollups by the number of blobs posted on Ethereum over the past month, accounting for 37.1%, 18.3%, 12.2%, 7.7%, and 4.8% of all posted blobs, respectively, out of a dataset of 703 540 blobs. Rollups outside the top 5 together posted the other 18.2% of blobs, and on average a slot had 3.56 blobs.

Distribution of L2 blob demand by L2

3.2.1 Security vs cost

L2 transactions that happened after the latest batch posted and finalized on Ethereum have not yet been secured by Ethereum. This means that if an L2 posts batches to Ethereum infrequently, there may be long periods during which its transactions are not secured by Ethereum. On the other hand, posting half-empty batches frequently costs more. Different L2s approach this trade-off between security and cost differently: for example, Zircuit posts a batch every 3 minutes, while zkSync posts a batch every time it has a full blob's worth of data.

Ethereum currently finalizes transactions in epochs consisting of 32 slots; for an epoch to be considered final, the network has to process two newer epochs. This means that if a rollup wants to secure its transactions on mainnet as frequently as Ethereum permits, it should post them in a blob before the ongoing epoch concludes. If the rollup does not have enough transactions to fill a blob's worth of data, this becomes a trade-off between minimizing the cost of running the L2 by submitting only full blobs and maximizing the security of the rollup by submitting blobs that are not full.

Hypothetical indifference curve to illustrate the trade-off between cost and security for an L2
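
As a rough, purely illustrative calculation of this trade-off under assumed numbers (a flat blob price and a constant rollup data rate, both invented), the sketch below compares the blob cost of posting every epoch versus waiting for full blobs:

```python
BLOB_BYTES = 131_072
EPOCH_SECONDS = 32 * 12          # 32 slots of 12 seconds
BLOB_PRICE_ETH = 0.0005          # assumed flat price per blob (invented)
DATA_RATE_BYTES_PER_SEC = 120    # assumed rollup data rate (invented)

def blobs_per_day(post_every_epoch: bool) -> float:
    # Assumes the rollup produces less than one full blob of data per epoch.
    if post_every_epoch:
        # One (possibly partial) blob per epoch, paid at full price regardless of fill.
        return 86_400 / EPOCH_SECONDS
    # Wait until a blob is full, then post it.
    return DATA_RATE_BYTES_PER_SEC * 86_400 / BLOB_BYTES

for every_epoch in (True, False):
    cost = blobs_per_day(every_epoch) * BLOB_PRICE_ETH
    label = "post every epoch" if every_epoch else "post full blobs "
    print(label, round(cost, 4), "ETH/day")
```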

3.2.2 Blob sizes

Among the different rollups, Base, Taiko, Arbitrum, World Chain, Zircuit, and Metal post blob(s) every epoch. While the median size of blobs posted by Base, Arbitrum, and World Chain is over 131 000 bytes (out of the maximum 131 072 bytes), Taiko, Zircuit, and Metal post blobs with a median below 131 000 bytes. This suggests that Base, Arbitrum, and World Chain have so much data to post that they consistently post full blobs every epoch. Zircuit and Metal, on the other hand, appear to post a blob every epoch to keep their rollups secured at the smallest possible granularity, despite not having enough data to fill the whole blob. Taiko has enough data to consistently post full blobs every epoch, yet it consistently posts blobs that are unoptimized in terms of data usage.

Looking at the sizes of posted blobs, Blast, Rari, Mint, Fuel, Lisk, StarkNet, Paradex, and zkSync Era post blobs with a median size of over 131 000 bytes, in addition to Base, Arbitrum, and World Chain. This suggests that these rollups have chosen to minimize the cost of running an L2 by submitting blobs only when they have a full blob's worth of data to post. In addition, Linea, Scroll, and Swan Chain post blobs with a median size of over 128 000 bytes, which suggests they also follow the cost-minimizing posting frequency but post slightly unoptimized blobs in terms of data usage.

In addition to these strategies, some rollups seem to have adopted a middle ground. Among the top 20 biggest rollups, Kroma posts blobs in 71% of epochs with a median blob size of 57 152 bytes. If Kroma were willing to compromise on how frequently its rollup is secured by Ethereum, it could halve its L1 costs by posting blobs of double the size at half the frequency.

3.2.3 Batches and data utilization optimization

Rollups typically post blobs in batches. Among the 20 biggest rollups, Taiko, Zircuit, Metal, Kroma, Paradex, Scroll, Swan Chain, and Morph typically post 1-blob batches; of these, Taiko and Swan Chain sometimes post batches of 2 blobs, while the others only post 1-blob batches. Mint is the only big rollup posting batches of 2 blobs. Among the big rollups, Arbitrum, World Chain, and zkSync Era typically post batches of 3 blobs, with zkSync Era being flexible and sometimes posting batches of a different size. No big rollup tends to post batches of 4 blobs. Base and Optimism tend to post batches of 5 blobs, and Blast tends to post batches of 6 blobs. Among the biggest rollups, Rari, Linea, Fuel, Lisk, and StarkNet are flexible in the size of the batches they post.

We now describe various ways to post data to blobs optimally. The most optimal data utilization is when all blobs posted by a rollup are full. Among the biggest rollups, Paradex, zkSync Era, and StarkNet have the most optimal data utilization, as they only post blobs with over 131 071 bytes of data out of the maximum 131 072 bytes.

The next most optimal data utilization is when all blobs posted by a rollup in an epoch, except one, are full. This ensures that all transactions are secured by Ethereum as frequently as possible while also optimizing the rollup's data utilization. No major rollup uses this kind of optimization.

After that, the next most optimal data utilization is when all batches posted by a rollup consist of full blobs except for one blob per batch. This suggests that the rollup has optimized data utilization within each batch but does not consider cross-batch optimization within epochs. Such rollups include Arbitrum, Base, and World Chain, which post multiple batches of blobs per epoch with one unoptimized blob per batch.

Finally, in terms of data utilization, the least optimal approach is to post multiple non-full blobs in batches. Taiko is a prime example of suboptimal data utilization: it posts blobs in batches of 1, on average posts 13.6 batches per epoch, and its 90th percentile blob size is 127 974 bytes. Considering its average blob size of 118 827 bytes, if Taiko fully optimized its blob posting it could decrease its L1 costs by roughly 9.3% with little to no compromise in security:

1 - \frac{\text{Average size of Taiko blobs}}{\text{Maximum blob size}} = 1 - \frac{118\,827}{131\,072} \approx 9.3\%

Similarly, if Metal, with a 90th percentile blob size of 353 bytes, posted only a single batch per epoch instead of the current average of 1.6, it could decrease its L1 costs by roughly 38% without compromising on security:

1 - \frac{1}{\text{Average number of Metal batches per epoch}} = 1 - \frac{1}{1.6} \approx 38\%

Overall, the average size of a blob is 118 300 bytes. This means that if a blob aggregator with perfect data utilization were implemented and adopted by L2s, the share of blob space that is currently wasted and could be reclaimed is

1 - \frac{118\,300}{131\,072} \approx 9.7\%

3.2.4 Average blob data posted per L2 transaction

Comparing the amount of L1 blob data an average L2 transaction consumes, zkSync Era requires the least data per L2 transaction. Scroll posts over double the L1 blob data per L2 transaction, and Linea over 50 times as much. Among optimistic rollups, Blast posts the least L1 blob data per L2 transaction; Arbitrum posts the second least, with almost double the data per transaction compared to zkSync Era and 10% more than Blast. Arbitrum, Base, Zora, and OP Mainnet post very similar amounts of L1 blob data per L2 transaction, with OP Mainnet posting 11% more than Arbitrum. Out of the studied rollups, Linea posts the most L1 blob data per L2 transaction, 30 times more than Blast and 50 times more than zkSync Era.

Based on these metrics, zkSync Era performs the best in terms of L1 data used per transaction and per log, which is logical considering it only posts state diffs instead of full transaction data. In terms of L1 blob gas used per transaction, Linea and StarkNet perform the worst among ZK rollups, and OP Mainnet performs the worst among optimistic rollups. In terms of L1 data used per log, Base performs the worst. Based on the amount of L2 gas per unit of posted L1 data, Zora and Blast perform the best, while Scroll and Base perform the worst.

Now let’s put demand and supply together and look at the blobspace price equilibrium.

3.3 L1 blobspace price equilibrium

Blobspace supply and demand follow a similar logic to blockspace for the most part, with a few nuances. While it might be crucial for certain users, such as MEV searchers, to have their transaction included in a specific block, or even at a specific position within a block, there is no such difference between blobs within an epoch, making blobspace more fungible than blockspace. This most likely makes blobspace pricing less volatile than blockspace pricing.

While a block can use anywhere between 0 and 36 million gas, there can only be between 0 and 6 blobs per slot, which makes blob supply much more discrete. Furthermore, while a blob has a maximum size of 128 KB, L2s have to pay for the whole blob even if they post less data.

The shape of the demand curve for blobs should also differ from that for blocks, because an L2 could post its data in blocks instead of blobs if the price of blobs rises above the price of the equivalent calldata. This essentially sets a ceiling on the price L2s are willing to pay for blobs. In addition, blob demand only recently reached the long-term target supply of 3 blobs per slot; before that, the demand curve was most likely steeper, as L2s knew that any price increase was most likely only temporary.
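
The price ceiling can be made concrete with a rough comparison of the two ways to post the same payload. The sketch below uses the standard 16 gas per non-zero calldata byte and illustrative fee levels; both fee values are assumptions for the example, and real comparisons would also account for compression and zero-byte discounts.

```python
BLOB_BYTES = 131_072
CALLDATA_GAS_PER_BYTE = 16   # gas per non-zero calldata byte on L1

def calldata_cost_wei(payload_bytes: int, base_fee_wei: int) -> int:
    """Cost of posting the payload as regular calldata in a block."""
    return payload_bytes * CALLDATA_GAS_PER_BYTE * base_fee_wei

def blob_cost_wei(blob_base_fee_wei: int) -> int:
    """Cost of one full blob: blob gas is charged per byte of blob capacity."""
    return BLOB_BYTES * blob_base_fee_wei

# Illustrative fee levels (assumed): 10 gwei block base fee, 1 gwei blob base fee.
block_fee, blob_fee = 10 * 10**9, 1 * 10**9
print(calldata_cost_wei(BLOB_BYTES, block_fee) / 10**18, "ETH via calldata")
print(blob_cost_wei(blob_fee) / 10**18, "ETH via a blob")
# A rational L2 posts via blobs only while blob_cost <= calldata_cost,
# i.e. roughly while blob_base_fee <= 16 * block_base_fee for a full blob.
```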

4. L2 blockspace

Similar to L1, L2 blockspace is the network's computational capacity available within one block. The main difference is that, because L2s are typically centralized, they can be more flexible in how much blockspace they supply based on demand. This section first discusses the supply of blockspace on different L2s, then analyzes the demand for blockspace, and finally looks at how the price equilibrium differs from L1.

4.1 L2 blockspace supply

Block creation on L2s is typically centralized: a centralized sequencer processes the transactions it receives and creates blocks out of them. This centralization allows L2s to change block-creation parameters, such as the gas limit, more flexibly than Ethereum, whose block creation is more decentralized. For example, Base increased its gas limit 9 times between June and December 2024.

The cost of including a transaction in an L2 block varies from L2 to L2. Typically, the fee for a transaction consists of both the cost of including the transaction's data in an L1 blob and the price of L2 gas. The L2 gas price is revenue collected by the L2 for providing its service, while the L1 component is the cost of securing the L2 on Ethereum. When a user sends a transaction on an L2, the L2 typically estimates the cost of securing that transaction on L1 and charges the user this estimated L1 blob cost on top of the L2 gas price.
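
A simplified sketch of this two-part fee follows; every parameter (compressed size, gas used, and prices) is invented for illustration, and real L2s each have their own estimation formulas.

```python
def estimate_l2_tx_fee_wei(
    l2_gas_used: int,
    l2_gas_price_wei: int,
    compressed_tx_bytes: int,
    l1_blob_base_fee_wei: int,
) -> int:
    """Total fee = L2 execution component + estimated L1 data (blob) component."""
    l2_execution_fee = l2_gas_used * l2_gas_price_wei
    # Blob gas is charged per byte of blob capacity the transaction's data occupies.
    l1_data_fee = compressed_tx_bytes * l1_blob_base_fee_wei
    return l2_execution_fee + l1_data_fee

# Invented example: a simple transfer, 21k L2 gas at 0.01 gwei, 100 compressed bytes,
# with the L1 blob base fee at 1 gwei.
fee = estimate_l2_tx_fee_wei(21_000, 10**7, 100, 10**9)
print(fee / 10**18, "ETH")
```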

The L2 gas pricing mechanism varies from L2 to L2. Optimism and the default OP Stack determine the L2 gas price using the EIP-1559 pricing mechanism, and on top of it they run a priority gas auction where users can get their transactions included more quickly by tipping the sequencer. Some L2s built with the OP Stack, such as Base, follow the same L2 gas pricing logic. On the other hand, some L2s, such as Arbitrum and Scroll, have implemented their own L2 gas pricing logic. Arbitrum has replaced the EIP-1559 pricing mechanism with its own “exponential mechanism”, which reacts to demand changes faster, and its transaction queue follows first-come-first-served logic where users do not benefit from tipping the sequencer. Scroll has removed the L2 base fee mechanism altogether, and its transaction queue follows a gas priority auction, similar to how Ethereum worked pre-EIP-1559.

Next, let’s look at L2 blockspace demand.

4.2 L2 blockspace demand

Below are visualizations of the types of transactions different L2s process. Out of all transactions processed by these blockchains, we categorize those interacting with stablecoins, lending protocols, ERC-20 tokens, NFT trading protocols, and decentralized exchanges. On all L2s, the category with the most activity is ERC-20 transfers, which is logical, since stablecoins are a subgroup of ERC-20 tokens and both lending and decentralized exchange protocols interact with ERC-20 tokens. The second most active category is either stablecoin transfers or decentralized exchange trades depending on the blockchain, with Arbitrum and Optimism having more decentralized exchange activity and others having roughly as much activity in decentralized exchanges as in stablecoin transfers. Among the categories, lending protocols and NFT trading see limited activity across all blockchains. Scroll is the most active chain in terms of lending, with lending protocol transactions typically making up a few percent of all activity and peaking at 11%. Blast is the most active chain in terms of NFT trading, with NFT trading accounting for between 5% and 15% of activity during the second quarter of 2024 and being nearly non-existent at other times.

Percentage of all transactions for different transaction categories over time on Blast

Percentage of all transactions for different transaction categories over time on Linea

Percentage of all transactions for different transaction categories over time on Arbitrum

Let’s finish off by putting L2 blockspace demand and supply together.

4.3 L2 blockspace price equilibrium

The supply curve for L2 blockspace is completely different from Ethereum's blockspace supply logic. The equilibrium quantity for blobs is currently 3 per slot, while the equilibrium quantity of L2 gas varies from L2 to L2: for Base and Optimism, for example, it is half of the gas limit, since they follow EIP-1559. The supply curve varies from L2 to L2 but is typically a combination of the supply function for L1 blobspace and that for L2 blockspace.

If the price of L1 blobspace increases as a result of increased demand from one L2, it also affects the pricing of L2 blockspace on other L2s through the higher L1 blob price. Assuming L2 users are at least partially price-elastic, an increase in the L1 price should result in a lower price users are willing to pay for L2 blockspace on top of the L1 blob cost. This could lead to a situation where a demand shock on one L2 affects all L2s via the L1 blobs.

As Base, Optimism, and the default OP Stack follow the EIP-1559 pricing mechanism, their supply curve for L2 gas, excluding the impact of L1 blobspace, should be roughly the same as Ethereum's. Arbitrum and Scroll, on the other hand, have implemented their own L2 gas pricing logic, so their supply curves should differ significantly from Ethereum's. Arbitrum's “exponential mechanism” reacts to demand changes faster than EIP-1559, which suggests that the slope of Arbitrum's supply curve is steeper than Ethereum's. Scroll does not have an L2 base fee mechanism and uses a gas priority auction similar to pre-EIP-1559 Ethereum, suggesting that the shape and slope of its supply curve can change from time to time in response to demand shocks.

In terms of demand, the curves' shapes and slopes should differ from L2 to L2. For example, differences in DeFi activity and in L2s' base fee logic affect the MEV opportunities available at different price levels, making the demand curves differ between L2s. Additionally, L2s with a lot of non-time-sensitive activity should be more price-elastic than ones with a lot of time-sensitive activity.

Finally, the pricing equilibrium might not be favorable for all L2s. If there is too little blob supply, big L2s may keep pushing demand up the demand curve, driving the equilibrium price higher and higher. This could result in a situation where small L2s are priced out of using blobs and need to rely on posting their data in L1 blocks instead.

As a final supplemental artifact, we developed a model that takes as inputs the demand from L2s and the supply of blobs, and outputs whether or not the blob gas fee is in price discovery.
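
As a rough illustration of the kind of check such a model performs (this is our own minimal sketch, not the model itself), the blob fee leaves its minimum and enters price discovery once demand persistently exceeds the per-slot target:

```python
TARGET_BLOBS_PER_SLOT = 3
MAX_BLOBS_PER_SLOT = 6

def blob_fee_in_price_discovery(demanded_blobs_per_slot: float) -> bool:
    """If sustained demand exceeds the target, excess blob gas accumulates and the
    blob base fee rises off its floor, i.e. the market enters price discovery."""
    effective_demand = min(demanded_blobs_per_slot, MAX_BLOBS_PER_SLOT)
    return effective_demand > TARGET_BLOBS_PER_SLOT

print(blob_fee_in_price_discovery(2.4))  # False: fee sits at the minimum
print(blob_fee_in_price_discovery(3.6))  # True: fee starts climbing
```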