The Ethereum community is in the midst of a fundamental phase of technological advancement. At the heart of this is the scaling and securing of the network through so-called zero-knowledge virtual machines (zkVMs), which are designed to make the execution and verification of Ethereum transactions more efficient, cheaper, and more secure. This article provides an overview of the current state of zkVM technology and its growing importance for Ethereum, particularly with regard to the ambitious goal of real-time proving.
The Role of zkVMs in the Ethereum Ecosystem
Zero-knowledge virtual machines make it possible to cryptographically prove the execution of a smart contract or transaction without revealing all the calculation steps. This approach not only improves the scalability and security of blockchains, but also opens up new possibilities for privacy-friendly and high-performance applications.
In the Ethereum context, zkVMs are increasingly seen as a central element of a “proof-based” architecture. In contrast to the classic method, where all nodes re-execute transactions, zkVMs could guarantee the validity of a block through compact, cryptographically secured proofs. The goal is to generate and verify these proofs in real time—i.e., within the block time of twelve seconds.
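To illustrate the shift in principle, here is a minimal Python sketch, assuming placeholder functions rather than any real client code: re_execute stands in for full transaction replay, verify_zk_proof for a succinct proof check.

```python
# Minimal sketch of the two validation paradigms.
# All functions here are illustrative placeholders, not real client code.

from dataclasses import dataclass

@dataclass
class Block:
    number: int
    transactions: list
    state_root: bytes       # claimed post-state root
    proof: bytes | None     # optional zk proof attached to the block

def re_execute(block: Block) -> bytes:
    """Classic paradigm: every node replays all transactions itself."""
    # ... full EVM execution of every transaction would happen here ...
    return block.state_root  # placeholder result

def verify_zk_proof(proof: bytes | None, claimed_root: bytes) -> bool:
    """Proof-based paradigm: check a compact proof instead of replaying."""
    # ... a succinct verifier check would run here ...
    return proof is not None  # placeholder check

def validate(block: Block, proof_based: bool) -> bool:
    if proof_based:
        # Verification cost stays small regardless of how heavy the block is.
        return verify_zk_proof(block.proof, block.state_root)
    # Re-execution cost grows with the amount of computation in the block.
    return re_execute(block) == block.state_root
```

The point of the proof-based path is that its cost is nearly independent of how much computation the block contains, whereas re-execution scales with it.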
Why Real-time Proofs are Important
Real-time proving is a crucial milestone on the road to mass adoption of Ethereum. If proofs of complete Ethereum blocks can be generated within the block time, the consequences are far-reaching. First, validators and even mobile devices such as smartphones or smartwatches could become fully fledged verifying nodes. Second, the dependency on the re-execution paradigm would be eliminated, which is particularly relevant for security and for the scaling of rollups.
The introduction of so-called “native rollups” also becomes realistic with real-time proofs. These rollups could consume Ethereum gas directly and integrate synchronously into the L1 ecosystem. In addition, synchronous interaction between rollups enables atomic operations such as flash loans or arbitrage, an advantage over today’s asynchronous architecture.
Transparency and Metrics for zkVMs
A significant step toward standardization and comparability of zkVMs is the introduction of a publicly available evaluation model. Each zkVM is evaluated according to an eight-part scheme that takes into account performance (verification time, proof size) and security aspects (audits, formal verification, post-quantum security, bug bounties). This evaluation scheme is known as a “pizza chart” and clearly displays the properties in a graphical format.
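As a rough illustration of such an evaluation record, the following Python sketch captures only the axes named above; the remaining slices of the eight-part scheme are not listed in this article and are therefore omitted rather than guessed, and the example entry is entirely hypothetical.

```python
# Illustrative record for the evaluation scheme described above.
# Only the axes named in the text are included; the full "pizza chart"
# has eight slices, so the remaining axes are left out rather than invented.

from dataclasses import dataclass

@dataclass
class ZkvmEvaluation:
    name: str
    verification_time_ms: float   # performance: how fast a proof verifies
    proof_size_kb: float          # performance: how large the proof is
    audited: bool                 # security: external audits completed
    formally_verified: bool       # security: formal verification performed
    post_quantum_secure: bool     # security: resistance to quantum attacks
    bug_bounty: bool              # security: active bug bounty program

example = ZkvmEvaluation(
    name="example-zkvm",          # hypothetical entry, not a real project
    verification_time_ms=40.0,
    proof_size_kb=128.0,
    audited=True,
    formally_verified=False,
    post_quantum_secure=False,
    bug_bounty=True,
)
```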
In addition, technical details about the hardware used, proving systems, supported instruction sets, and recursion strategies are made transparent. This open approach is intended to enable both developers and users to make informed decisions about zkVMs.
Advances in Proof Verification in the Browser
Another milestone is the ability to verify zk proofs directly in the web browser. Initial prototypes are already achieving verification times of less than 50 milliseconds for Groth16- and PlonK-based proofs. Verification takes place on a single CPU thread – without a server or additional software. In the long term, this development could enable fully verifiable wallets that run on end devices.
Prover Killers: Stress Tests for zkVMs
To ensure the robustness of zkVMs, a system of so-called “prover killers” has been introduced. These are specifically designed EVM transactions that are particularly difficult to prove at the cryptographic level. These stress tests focus on known “ZK-unfriendly” operations such as SHA256, elliptic curve calculations, or ModExp. The goal is to remain within the target time of twelve seconds even in the worst-case scenario, thus ensuring that the systems are ready for production.
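A hedged sketch of what such a stress-test harness could look like is shown below; the workload names and the prove_block placeholder are illustrative assumptions, not an actual prover-killer suite.

```python
# Sketch of a "prover killer" benchmark harness.
# The workloads follow the ZK-unfriendly operations mentioned above;
# prove_block() is a placeholder for whatever prover is under test.

import time

TARGET_SECONDS = 12.0  # Ethereum block time the proof must stay under

# Hypothetical worst-case workloads built around ZK-unfriendly operations.
PROVER_KILLERS = {
    "sha256_heavy": "block saturated with SHA256 precompile calls",
    "ec_heavy": "block saturated with elliptic-curve operations",
    "modexp_heavy": "block saturated with large ModExp precompile calls",
}

def prove_block(workload: str) -> None:
    """Placeholder: run the prover against the given worst-case block."""
    time.sleep(0.01)  # stands in for real proving work

def run_stress_tests() -> dict[str, bool]:
    """Check whether each worst-case block stays within the 12-second target."""
    results = {}
    for name in PROVER_KILLERS:
        start = time.monotonic()
        prove_block(name)
        elapsed = time.monotonic() - start
        results[name] = elapsed <= TARGET_SECONDS
    return results

if __name__ == "__main__":
    print(run_stress_tests())
```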
Advances in Data Provision
A bottleneck in the generation of ZK proofs is often the preparation of the relevant data: every access to the Ethereum state, whether a read or a write operation, requires Merkle (witness) paths for the accounts and storage slots involved. Instead of numerous RPC calls for individual accounts, a new procedure allows these paths to be combined into a single request. This significantly reduces the data-fetching overhead and brings the process close to real time.
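The exact batching mechanism is not specified in detail here, but the general idea of collapsing many witness requests into one round trip can be sketched with the standard JSON-RPC batch format and the EIP-1186 eth_getProof method; the endpoint URL and the empty storage-key lists below are assumptions.

```python
# Sketch: collapsing many per-account witness requests into one round trip.
# Uses the standard JSON-RPC batch format and the EIP-1186 eth_getProof
# method; the batching procedure described in the article may differ.

import json
import urllib.request

RPC_URL = "http://localhost:8545"  # assumed local node endpoint

def build_proof_batch(accounts: list[str], block: str = "latest") -> list[dict]:
    """One JSON-RPC batch containing a Merkle-proof request per account."""
    return [
        {
            "jsonrpc": "2.0",
            "id": i,
            "method": "eth_getProof",
            "params": [address, [], block],  # no storage keys in this sketch
        }
        for i, address in enumerate(accounts)
    ]

def fetch_witnesses(accounts: list[str]) -> list[dict]:
    """Send all proof requests in a single HTTP round trip."""
    payload = json.dumps(build_proof_batch(accounts)).encode()
    req = urllib.request.Request(
        RPC_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Compared with one HTTP request per account, a single batch avoids repeated round-trip latency, which is exactly the data-fetching overhead the new procedure targets.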
Funding through Grants and Competitions
To further promote innovation, a funding program for zkVM teams has been launched. Three grants of US$100,000 each will be awarded to zkVM projects that are capable of generating proofs under real-time conditions on moderate hardware. Requirements include an open-source prover, proofs of real Ethereum blocks, and continuously documented improvement. Further competitions and benchmarks are already in preparation to systematically track progress.
Technological Competition among zkVMs
There are currently over 20 zkVM projects in active development or production. Some focus on maximum performance, others on maximum compatibility, reusability, or data protection. Projects that rely on modular architectures are particularly dynamic. These allow different proving backends, instruction sets, and compiler chains to be freely combined depending on the use case and target platform.
Many zkVMs are based on the RISC-V standard, but some pursue their own ZK-optimized instruction sets to further reduce proof sizes and trace depth. Other approaches use LLVM-compatible compilers or languages with strong formal guarantees, such as Lean, or Rust combined with formal verification. Approaches that avoid custom recompilation steps or specially developed assembly stacks, and thereby reduce the attack surface, are particularly promising.
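Purely as an illustration of this modularity, the following hypothetical configuration shows how instruction set, compiler chain, proving backend, and recursion strategy could be selected independently; none of the component names refer to a specific project.

```python
# Hypothetical configuration sketch for a modular zkVM stack.
# The values are illustrative placeholders, not references to real projects.

zkvm_config = {
    "instruction_set": "rv32im",     # a RISC-V profile or a custom ZK-optimized ISA
    "compiler_chain": "llvm",        # e.g. an LLVM-based toolchain targeting that ISA
    "proving_backend": "stark-fri",  # e.g. a STARK/FRI or SNARK backend
    "recursion": {
        "enabled": True,
        "wrapper": "groth16",        # compact final proof for cheap verification
    },
    "target_hardware": "gpu-cuda",   # e.g. CPU, CUDA GPU, or a multi-node cluster
}
```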
GPU Acceleration and Multi-node Proving
The parallelization of proof generation on GPUs is a critical factor on the path to real time. Some teams are already achieving proof times of less than 30 seconds per Ethereum block with pipelines fully optimized for CUDA. Latency can be further reduced by using multi-node clusters that coordinate the calculation of individual proving segments and then aggregate them efficiently.
Special attention is being paid to recursion – the cryptographically secure merging of multiple partial proofs into a compact final proof. Advances in this area – such as reducing the aggregation time to less than 150 milliseconds – demonstrate the potential of zkVMs to become a universal, fully verifiable computer model.
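A conceptual Python sketch of this pipeline, segment-parallel proving followed by recursive aggregation, is given below; prove_segment and aggregate are placeholders for the GPU kernels and recursion circuits, not real implementations.

```python
# Conceptual sketch of segment-parallel proving followed by recursive
# aggregation. prove_segment() and aggregate() are placeholders for the
# GPU kernels and recursion circuits mentioned above.

from concurrent.futures import ProcessPoolExecutor

def prove_segment(segment: bytes) -> bytes:
    """Placeholder: prove one slice of the execution trace (GPU-bound in practice)."""
    return b"proof:" + segment[:8]

def aggregate(proofs: list[bytes]) -> bytes:
    """Placeholder: recursively merge partial proofs into one compact proof."""
    while len(proofs) > 1:
        # Pairwise aggregation halves the number of proofs each round.
        merged = [proofs[i] + proofs[i + 1] for i in range(0, len(proofs) - 1, 2)]
        if len(proofs) % 2:
            merged.append(proofs[-1])  # carry an odd leftover proof forward
        proofs = merged
    return proofs[0]

def prove_block(trace: bytes, num_segments: int = 8) -> bytes:
    """Split the trace, prove segments in parallel, then aggregate recursively."""
    size = max(1, len(trace) // num_segments)
    segments = [trace[i:i + size] for i in range(0, len(trace), size)]
    with ProcessPoolExecutor() as pool:     # segments proven in parallel
        partial_proofs = list(pool.map(prove_segment, segments))
    return aggregate(partial_proofs)        # recursion step at the end
```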
Open Source and Formal Verification as new Standards
In addition to performance, the trustworthiness of proving systems is increasingly becoming a focus. Many teams publish their source code, including GPU kernels and compiler chains, in full under open source licenses. At the same time, external audits and formal verification processes are carried out to guarantee the security of the arithmetic, constraint systems, and hash functions used.
Outlook
Advances in zkVMs mark a turning point in the history of Ethereum. The transition from re-executing nodes to verifying clients, from asynchronous rollups to synchronous state transitions, and from classic EVM models to modular, formally secured VM stacks is imminent. Real-time proving, hardware independence, privacy features, and massive performance gains are no longer theoretical concepts—they are a tangible reality in 2025.
The Ethereum ecosystem is thus entering a new phase: a phase in which transparency, efficiency, and security no longer compete, but advance together.
Advances in PeerDAS: Status Quo and Technical Developments
PeerDAS (Peer Data Availability Sampling) is a central element in the scaling and stabilization of Ethereum. Technical progress, challenges, and strategies for further implementation are discussed in regular developer meetings. This article summarizes the current developments of various client teams and highlights key technical concepts that are crucial for the operation of a decentralized data distribution system.
Synchronization Problems and their Solution
A key topic in recent developments is the resolution of synchronization problems. Some client teams report edge cases in which certain synchronization methods fail, particularly range sync, a method that retrieves ranges of data from different peer sources. To address these issues, various fixes have been implemented and evaluated in test networks. The goal is to improve the reliability of data synchronization, especially in unstable networks or networks with low peer density.
Selective Subscription to Subnets
An important new feature is the ability to subscribe to specific data subnets. Previously, so-called supernodes had to subscribe to all subnets (even those without data relevance), which led to increased load. The introduction of a flag for selecting individual subnets now enables more targeted and resource-efficient network participation. This particularly improves the scalability and efficiency of nodes with limited resources.
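A hypothetical configuration sketch of this behaviour follows; the field names are invented for illustration and do not correspond to any specific client's flag.

```python
# Hypothetical node configuration illustrating selective subnet subscription.
# The field names below are invented and do not match any real client option.

node_config = {
    "supernode": False,                 # do not subscribe to every subnet
    "data_column_subnets": [0, 5, 17],  # only the subnets this node needs
}

def subscribed_subnets(config: dict, total_subnets: int) -> list[int]:
    """A supernode follows all subnets; a regular node only its selection."""
    if config["supernode"]:
        return list(range(total_subnets))
    return config["data_column_subnets"]
```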
Improved Strategies for Data Reconstruction
A major problem during synchronization can arise when not all the required data blocks (known as “columns”) are available. A new strategy is now being tested: if a complete retrieval is not possible, the node attempts to reconstruct the missing columns from any 64 available columns. This method relies on the fact that, thanks to erasure coding, a minimum number of columns is sufficient to restore the missing information with the appropriate computing power. This significantly increases the robustness of the network, especially under difficult network conditions.
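The fallback logic can be sketched as follows; the column count of 128 and the reconstruct_missing placeholder are assumptions used only to illustrate the 64-column threshold described above.

```python
# Sketch of the reconstruction fallback: if fewer than all columns arrive,
# any 64 of them are enough to rebuild the rest. reconstruct_missing() stands
# in for the real erasure-code recovery; the total of 128 is an assumption.

TOTAL_COLUMNS = 128            # assumed extended column count
RECONSTRUCTION_THRESHOLD = 64  # minimum columns needed to recover everything

def reconstruct_missing(available: dict[int, bytes]) -> dict[int, bytes]:
    """Placeholder for erasure-code recovery of the absent columns."""
    recovered = dict(available)
    for index in range(TOTAL_COLUMNS):
        recovered.setdefault(index, b"<recovered>")
    return recovered

def ensure_columns(available: dict[int, bytes]) -> dict[int, bytes]:
    if len(available) == TOTAL_COLUMNS:
        return available                       # everything arrived directly
    if len(available) >= RECONSTRUCTION_THRESHOLD:
        return reconstruct_missing(available)  # rebuild the rest locally
    raise RuntimeError("not enough columns to reconstruct the data")
```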
Integration of Builder APIs and Interface Changes
Another key issue is the further development of the so-called builder API. This API makes it possible to obtain pre-built blocks from external sources (builders). In future iterations, the aim is to remove the return of blobs (large data payloads referenced by blocks) from this API in order to reduce the network load. In addition, consideration is being given to making the return of the execution part of a block optional. This should make processing more efficient, especially since many blobs are not returned in time anyway.
Changes to RPC Interfaces and Validation Rules
Changes have also been made at the Remote Procedure Calls (RPC) level. The behavior for invalid blob requests has been adjusted: Instead of an error, the system now returns a null value if the requested data structure does not exist or is no longer available. This change is particularly relevant for handling historical or obsolete data and is intended to increase client fault tolerance.
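A minimal sketch of the changed behaviour, with an in-memory dictionary standing in for the client's actual blob storage:

```python
# Minimal sketch of the changed RPC behaviour: a request for a blob that does
# not exist or has been pruned now yields a null/None result instead of an
# error. blob_store is only a stand-in for the client's real storage.

blob_store: dict[str, bytes] = {}  # illustrative in-memory store

def get_blob(blob_id: str) -> bytes | None:
    # Old behaviour (for comparison): raise an error for unknown/pruned blobs.
    # New behaviour: return None so callers can tolerate missing historical data.
    return blob_store.get(blob_id)
```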
Preparation for Future Network Specifications
A recurring point of discussion concerns the so-called “blob schedules” – a kind of timetable for when which forks should become active in the network and which functions they should contain. The desire is to keep both the consensus and execution levels (CL and EL) synchronized. The specification of such schedules requires close coordination between the various client teams. Inconsistencies in naming conventions or indexing could lead to incompatibilities here.
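The rough shape of such a schedule could look like the following sketch; the field names, fork labels, and limits are purely illustrative and not taken from any specification.

```python
# Hypothetical shape of a "blob schedule": which settings apply from which
# fork or epoch onward. All names and numbers here are invented.

blob_schedule = [
    {"fork": "fork_a", "activation_epoch": 0,       "max_blobs_per_block": 6},
    {"fork": "fork_b", "activation_epoch": 100_000, "max_blobs_per_block": 9},
]

def max_blobs_at(epoch: int) -> int:
    """Pick the latest schedule entry whose activation epoch has passed."""
    active = [e for e in blob_schedule if e["activation_epoch"] <= epoch]
    return max(active, key=lambda e: e["activation_epoch"])["max_blobs_per_block"]
```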
Challenges in Test Network Architecture
The test infrastructure faces the challenge of having to test forks such as Electra, as well as devnet upgrades, at very early stages of a network's life. The problem is that many test environments are configured from the genesis state (“slot 0”), while new forks are planned to activate much later. This creates a need for more flexible test scenarios or an adjustment of the test configurations.
Availability Issues and Security for Blob Data
A critical issue was raised regarding blob availability: since blob data is not propagated directly but reconstructed from columns, there is a potential security risk with private transactions or targeted withholding. One of the proposed solutions is a new, simplified API endpoint (getBlobs) that only returns reliable data from so-called “supernodes.” This would increase reliability as a fallback, but the network still needs to remain functional without such APIs, for example through redundancy or reconstruction procedures.
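The intended fallback behaviour can be sketched as follows, assuming placeholder functions for both the getBlobs-style endpoint and local reconstruction.

```python
# Sketch of the fallback logic discussed above: prefer the simplified
# getBlobs-style endpoint, but keep working without it by reconstructing
# blobs from sampled columns. Both helper functions are placeholders.

def fetch_blobs_from_supernode(block_root: str) -> list[bytes] | None:
    """Placeholder for the proposed getBlobs endpoint; None if unavailable."""
    return None

def reconstruct_blobs_from_columns(block_root: str) -> list[bytes]:
    """Placeholder for local recovery of blobs from sampled data columns."""
    return [b"<reconstructed blob>"]

def obtain_blobs(block_root: str) -> list[bytes]:
    blobs = fetch_blobs_from_supernode(block_root)
    if blobs is not None:
        return blobs                                    # fast path via supernode API
    return reconstruct_blobs_from_columns(block_root)  # network works without it
```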
In summary
The ongoing work on PeerDAS highlights the complexity and degree of innovation involved in scaling Ethereum. Advances in synchronization, more efficient interfaces, adaptive data reconstruction, and flexible testing infrastructures are key building blocks on the road to productive deployment. Many components are nearing production readiness, but with the next major network forks on the horizon, coordination between client teams remains crucial.
Multicall Simulation in Ethereum: Coordination, Specifications, and the Current Status
The multicall functionality within the Ethereum ecosystem is an essential tool for efficiently bundling multiple contract calls into a single network access. The coordinated development of such functions requires close cooperation and regular coordination among the parties involved. The recent technical meeting served to synchronize the ongoing work in the area of multicall simulation.
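Since the multicall simulation itself is still being specified, the following Python sketch only illustrates the basic idea: several calls are applied in order against one shared state snapshot instead of being sent as separate requests. Call, State, and simulate_call are placeholders, not part of any actual specification.

```python
# Conceptual sketch of bundling several contract calls into one simulation
# pass over a shared state snapshot. All types and functions are placeholders.

from dataclasses import dataclass, field

@dataclass
class Call:
    target: str       # contract address to call
    calldata: bytes   # ABI-encoded function call

@dataclass
class State:
    storage: dict = field(default_factory=dict)  # stand-in for the EVM state

def simulate_call(state: State, call: Call) -> bytes:
    """Placeholder: execute one call against the shared state snapshot."""
    return b"<return data>"

def simulate_multicall(state: State, calls: list[Call]) -> list[bytes]:
    """One simulation pass over all bundled calls, i.e. one network access."""
    return [simulate_call(state, call) for call in calls]
```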
Author
Ed Prinz serves as chairman of https://dltaustria.com, the most renowned non-profit organization in Austria specializing in blockchain technology. DLT Austria is actively involved in educating and promoting the value and application possibilities of distributed ledger technology. This is done through educational events, meetups, workshops, and open discussion forums, all in voluntary collaboration with leading industry players.
Disclaimer
This is my personal opinion and not financial advice.
For this reason, I cannot guarantee the accuracy of the information in this article. If you are unsure, you should consult a qualified advisor whom you trust. This article does not make any guarantees or promises regarding profits. All statements in this and other articles are my personal opinion.