vara-eth testnet re-genesis #5341
grishasobol wants to merge 2 commits into vara-eth-testnet-5e1092edc
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request implements a re-genesis mechanism for the vara-eth testnet. It provides the tools needed to capture a snapshot of the current chain state—including programs, codes, and storage blobs—and subsequently initialize a new database instance from this snapshot. This functionality is critical for resetting the network state while preserving existing programs and their data.
Code Review
This pull request introduces re-genesis functionality for the ethexe layer, enabling the chain to be bootstrapped from a state dump. Key additions include a new dump CLI command for creating and inspecting state snapshots, a reGenesis function in the Router Solidity contract, and logic to restore programs, codes, and scheduled tasks from these dumps during database initialization. My review identified several areas for improvement: the dump command may fail due to database locking if run on a live node, the reGenesis contract function lacks a critical event for off-chain tracking, the unsafe keyword is inappropriately used for the memory() database constructor, and the JSON serialization of state blobs is highly memory-inefficient and poses an OOM risk for large dumps.
```rust
let rocks_db = RocksDatabase::open(
    self.db
        .clone()
        .or_else(|| self.params.node.as_ref().map(|node| node.db_dir()))
        .context("missing database path")?,
)
.context("failed to open database")?;
```
RocksDatabase::open typically acquires an exclusive lock on the database directory. If this command is executed while the node is running, it will fail with a locking error. To support creating state dumps from a live node, consider opening the database in read-only mode (e.g., using RocksDatabase::open_read_only if the wrapper supports it).
```solidity
function reGenesis(Gear.GenesisBlockInfo memory newGenesis) public onlyOwner {
    Storage storage router = _router();
    router.genesisBlock = newGenesis;
    router.latestCommittedBatch = Gear.CommittedBatchInfo({hash: bytes32(0), timestamp: 0});
}
```
The reGenesis function performs a critical state transition by resetting the genesis block and the latest committed batch. It is highly recommended to emit an event (e.g., event ReGenesis(Gear.GenesisBlockInfo newGenesis)) to allow off-chain components like observers and indexers to detect and handle this transition correctly without relying on polling storage.
```rust
#[cfg(not(feature = "mock"))]
pub unsafe fn memory() -> Self {
    Self::memory_inner()
}
```
The memory() function is marked as unsafe when the mock feature is disabled, but it does not appear to perform any memory-unsafe operations. In Rust, unsafe should be reserved for operations that can violate memory safety, not for discouraging use. If the intention is to warn users against using this in production, consider a more descriptive name like unstable_memory(), adding #[doc(hidden)], or using a specific feature gate instead of overloading the unsafe keyword.
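One way to act on this suggestion is sketched below: keep the constructor's behavior but drop the `unsafe` marker in favor of a descriptive name plus `#[doc(hidden)]`. The `Database` type and `memory_inner` body here are hypothetical stand-ins, not the crate's real types; only the naming pattern is the point.

```rust
// Hypothetical stand-in for the real database type.
pub struct Database {
    // placeholder in-memory backing store
    entries: Vec<(Vec<u8>, Vec<u8>)>,
}

impl Database {
    fn memory_inner() -> Self {
        Database { entries: Vec::new() }
    }

    /// In-memory database intended for tests and tooling only.
    /// The descriptive name and `#[doc(hidden)]` discourage
    /// production use without overloading the `unsafe` keyword.
    #[doc(hidden)]
    pub fn unstable_memory() -> Self {
        Self::memory_inner()
    }
}

fn main() {
    // No `unsafe` block is needed at the call site anymore.
    let db = Database::unstable_memory();
    println!("entries: {}", db.entries.len());
}
```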
```rust
fn serialize_blobs<S: Serializer>(blobs: &Vec<Vec<u8>>, serializer: S) -> Result<S::Ok, S::Error> {
    let encoded = blobs.encode();

    let mut compressed = Vec::new();
    let mut encoder = DeflateEncoder::new(&mut compressed, Compression::default());
    io::Write::write_all(&mut encoder, &encoded).map_err(serde::ser::Error::custom)?;
    encoder.finish().map_err(serde::ser::Error::custom)?;

    let hex = format!("0x{}", hex::encode(&compressed));
    serializer.serialize_str(&hex)
}
```
The custom JSON serialization for blobs is very memory-intensive. It SCALE-encodes the entire vector, compresses it, and then hex-encodes it into a single string using format!. For large state dumps, this can lead to massive memory allocations (potentially several times the size of the actual data) and OOM issues. Since StateDump already implements Encode/Decode for a binary format, consider whether such an inefficient JSON representation is necessary, or use a more memory-efficient hex encoding approach.
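As an illustration of a lower-overhead alternative, the sketch below hex-encodes a payload directly into one preallocated `String` rather than building a full intermediate string with `hex::encode` and copying it again through `format!`. This is a standalone sketch, not the PR's actual serializer; the SCALE-encoding and compression steps are omitted.

```rust
// Hex-encode bytes into a single preallocated String with a "0x"
// prefix, avoiding the extra full-size intermediate allocation that
// `format!("0x{}", hex::encode(&compressed))` performs.
fn hex_with_prefix(bytes: &[u8]) -> String {
    let mut out = String::with_capacity(2 + bytes.len() * 2);
    out.push_str("0x");
    for &b in bytes {
        // each byte becomes two lowercase hex digits
        out.push(char::from_digit(u32::from(b >> 4), 16).unwrap());
        out.push(char::from_digit(u32::from(b & 0x0f), 16).unwrap());
    }
    out
}

fn main() {
    let compressed = vec![0xde, 0xad, 0xbe, 0xef];
    println!("{}", hex_with_prefix(&compressed)); // prints "0xdeadbeef"
}
```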
```solidity
function reGenesis(Gear.GenesisBlockInfo memory newGenesis) public onlyOwner {
    Storage storage router = _router();
    router.genesisBlock = newGenesis;
    router.latestCommittedBatch = Gear.CommittedBatchInfo({hash: bytes32(0), timestamp: 0});
```
I'd prefer a different approach: reset router.genesisBlock and router.latestCommittedBatch and set the current block height, as we do in the constructor. Then anyone should call lookupGenesisHash in the next block. This prevents the admin from setting any block hash they want.
No longer needed — the test-net was updated.
No description provided.