Technical interpretation of Merlin's operating mechanism
Author: Faust, geek web3
From the inscription summer of 2023 to the present, Bitcoin Layer 2 has been a highlight of the whole Web3 space. Although this field emerged much later than Ethereum's Layer 2 scene, the unique appeal of PoW and the smooth landing of the spot ETF have allowed Bitcoin, free of "securitization" risk, to attract tens of billions of dollars of capital to the Layer 2 track in just half a year.
In the Bitcoin Layer 2 track, Merlin, with billions of dollars in TVL, is undoubtedly the largest and most followed. With clear staking incentives and decent yields, Merlin rose to prominence in a matter of months, building an ecosystem narrative that eclipses even Blast. As Merlin's popularity grows, discussion of its technical design has become an increasingly hot topic.
In this article, Geek Web3 focuses on Merlin Chain's technical solution, interpreting its published documentation and protocol design. We aim to help more people understand Merlin's general workflow and gain a clearer picture of its security model, so that everyone can grasp how this leading Bitcoin Layer 2 works in a more intuitive way.
Merlin's decentralized oracle network: an open off-chain DAC council
For any Layer 2, whether on Ethereum or Bitcoin, DA and data-publication cost are among the most important problems to solve. Because the Bitcoin network itself is notoriously limited and inherently cannot support high data throughput, making the most of this scarce DA space is a difficult problem that tests the imagination of Layer 2 teams.
One conclusion is obvious: if a Layer 2 publishes raw, unprocessed transaction data directly into Bitcoin blocks, it can achieve neither high throughput nor low fees. The mainstream solutions are either to compress the data as much as possible before uploading it to Bitcoin blocks, or to publish the data off the Bitcoin chain entirely.
Of the Layer 2s taking the first approach, the best known is probably Citrea, which intends to upload the Layer 2 state diff over a period of time, i.e., the aggregate result of state changes across many accounts, together with the corresponding ZK proof, to the Bitcoin chain. Anyone can then download the state diff and ZKP from the Bitcoin mainnet to monitor Citrea's state changes. This method can shrink on-chain data size by more than 90%.
While this greatly reduces data size, the bottleneck remains significant. If a large number of account state changes occur in a short period, the Layer 2 must summarize and upload all of those changes to the Bitcoin chain, so the final data-publication cost cannot be kept very low, as can be seen in many Ethereum ZK Rollups.
Many Bitcoin Layer 2s simply take the second path: use a DA solution off the Bitcoin chain, either by building a DA layer themselves or by using Celestia, EigenDA, and the like. B^Square, BitLayer, and Merlin, the protagonist of this article, all follow this off-chain DA scaling approach.
In Geek Web3's previous article, "Analyzing the New Version of B^2's Technology Roadmap: The Necessity of an Off-Chain DA and Verification Layer for Bitcoin", we mentioned that **B^2 directly imitates Celestia, building an off-chain DA network with data-sampling support, named B^2 Hub. "DA data" such as transaction data or state diffs is stored off the Bitcoin chain, and only its datahash / Merkle root is uploaded to the Bitcoin mainnet.**
This essentially treats Bitcoin as a trustless bulletin board: anyone can read the datahash from the Bitcoin chain. After obtaining the DA data from an off-chain data provider, you can check whether it corresponds to the on-chain datahash, i.e., whether hash(data1) == datahash1. If they match, the off-chain data provider has given you the right data.
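The check above can be sketched in a few lines. This is a minimal illustration, using SHA-256 as a stand-in for whatever hash function the actual protocol uses; `verify_da_data` is a hypothetical helper name, not an API from Merlin or B^2.

```python
import hashlib

def verify_da_data(offchain_data: bytes, onchain_datahash: str) -> bool:
    """Recompute the hash of data fetched from an off-chain provider
    and compare it with the datahash read from the Bitcoin chain."""
    return hashlib.sha256(offchain_data).hexdigest() == onchain_datahash

# A data provider hands us a blob; the chain holds only its hash.
blob = b"layer2 state diff batch"
datahash = hashlib.sha256(blob).hexdigest()

assert verify_da_data(blob, datahash)                 # honest provider
assert not verify_da_data(b"tampered blob", datahash) # forged data is rejected
```

The on-chain datahash acts as the "clue": it cannot by itself recover the data, but it lets anyone who does obtain the data confirm its integrity.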
The process above ensures that the data served to you by off-chain nodes is tied to certain "clues" anchored on Layer 1, preventing the DA layer from maliciously serving false data. But there is a tricky scenario: what if the data's source, the sequencer, never sends out the data behind the datahash at all, publishing only the datahash to the Bitcoin chain while deliberately withholding the corresponding data from everyone?
Similar scenarios include publishing only the ZK proof and StateRoot while withholding the corresponding DA data (state diff or transaction data). People can verify that the ZKP computation is valid, i.e., that the transition from Prev_Stateroot to New_Stateroot is sound, but they cannot tell which account states have changed. In this case, although users' assets are safe, no one can determine the network's actual state: nobody knows which transactions have been included on chain or which contract states have been updated.
This is "data withholding". Dankrad of the Ethereum Foundation briefly discussed a similar issue on Twitter in August 2023, mainly targeting something called a "DAC".
Many Ethereum Layer 2s that adopt off-chain DA set up a handful of specially permissioned nodes to form a committee, the Data Availability Committee (DAC). This committee acts as a guarantor, attesting that the sequencer really has published the full DA data (transaction data or state diff) off-chain. The DAC nodes then collectively produce a multi-signature; as long as the multi-signature meets the threshold (e.g., 2/4), the relevant contract on Layer 1 assumes by default that the sequencer has passed the DAC's inspection and truthfully released the complete DA data off-chain.
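The threshold logic a Layer 1 contract applies to the DAC's attestation can be sketched as follows. This is a toy model under stated assumptions: signature verification is abstracted to set membership, and the member names and `THRESHOLD` value are made up for illustration; a real contract would verify actual cryptographic signatures.

```python
# Hypothetical 2-of-4 DAC attestation check.
DAC_MEMBERS = {"node_a", "node_b", "node_c", "node_d"}
THRESHOLD = 2  # e.g. the 2/4 threshold mentioned above

def dac_attestation_passes(signers: set) -> bool:
    """Accept the sequencer's data-publication claim iff enough
    distinct DAC members have signed off on it."""
    valid = signers & DAC_MEMBERS  # ignore signatures from non-members
    return len(valid) >= THRESHOLD

assert dac_attestation_passes({"node_a", "node_c"})
assert not dac_attestation_passes({"node_a", "outsider"})
```

Note that this model makes the trust assumption explicit: the contract believes whatever a threshold of committee members jointly asserts, which is exactly why the committee's composition matters so much.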
Ethereum Layer 2 DAC committees basically follow a PoA model, admitting only a few KYC'd or officially designated nodes, which has made "DAC" synonymous with "centralized" and "consortium chain". Moreover, in some Ethereum Layer 2s adopting the DAC model, the sequencer sends DA data only to the DAC member nodes and uploads it almost nowhere else; anyone who wants the DA data must obtain the DAC committee's permission, which is not fundamentally different from a consortium chain.
There is no doubt that the DAC should be decentralized. Even if a Layer 2 does not upload DA data directly to Layer 1, admission to the DAC committee should be open to the public, to prevent a small group from colluding to do evil. (For a discussion of DAC misbehavior scenarios, see Dankrad's earlier remarks on Twitter.)
**BlobStream, previously proposed by Celestia, essentially replaces the centralized DAC with Celestia itself:** an Ethereum L2 sequencer can publish DA data to the Celestia chain, and if 2/3 of Celestia's nodes sign off on it, the Layer 2's dedicated contract on Ethereum accepts that the sequencer has truthfully released the DA data, effectively letting Celestia's nodes act as guarantors. Given that Celestia has hundreds of validator nodes, we can consider this large DAC reasonably decentralized.
The DA solution Merlin uses is actually close to Celestia's BlobStream: it opens DAC admission in PoS form to push it toward decentralization. Anyone can run a DAC node as long as they stake enough assets. In Merlin's documentation, these DAC nodes are called Oracles, and it is stated that staking of BTC, MERL, and even BRC-20 tokens will be supported, enabling a flexible staking mechanism as well as Lido-style proxy staking. (The Oracle's PoS staking protocol is basically one of Merlin's next core narratives, and the staking interest rates on offer are relatively high.)
Here is a brief description of Merlin's workflow (picture below):
Oracle nodes apply special processing to the ZK-proof verification computation, generate a Commitment from it, and send it to the Bitcoin chain, where anyone may challenge the commitment; this process is essentially the same as BitVM's fraud-proof protocol. If a challenge succeeds, the Oracle node that published the Commitment is financially penalized. Of course, what the Oracle publishes to the Bitcoin chain also includes the hash of the current Layer 2 state, the StateRoot, as well as the ZKP itself, all of which must go onto the Bitcoin chain for outside observers to inspect.
A few details still need elaboration. First, Merlin's roadmap mentions that in the future the Oracle will back up DA data to Celestia, so Oracle nodes can safely prune local historical data instead of retaining it forever. Second, the Commitment generated by the Oracle Network is actually the root of a Merkle tree; it is not enough to disclose only the root to the outside world: the complete dataset behind the Commitment must also be published, which requires a third-party DA platform such as Celestia, EigenDA, or another DA layer.
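The relationship between the on-chain Commitment and the off-chain dataset can be sketched as a Merkle-root computation. This is an illustrative sketch, assuming SHA-256 and Bitcoin-style duplication of the last node on odd levels; the real commitment scheme may differ in hash choice and tree layout.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves) -> bytes:
    """Compute the Merkle root of a list of data chunks, duplicating
    the last node when a level has an odd number of entries."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

chunks = [b"tx-batch-1", b"tx-batch-2", b"tx-batch-3"]
commitment = merkle_root(chunks)  # only this 32-byte root goes on-chain

# The full `chunks` dataset must live on a DA layer (e.g. Celestia)
# so anyone can recompute the root and detect substituted data.
assert merkle_root(chunks) == commitment
assert merkle_root([b"tx-batch-1", b"evil", b"tx-batch-3"]) != commitment
```

This is why publishing only the root is insufficient: without the leaves, the root is an opaque 32-byte value, which is exactly the data-withholding concern described earlier.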
Security Model Analysis: Optimistic ZK-Rollup + Cobo's MPC Service
Above we briefly described Merlin's workflow, and you should now have a grasp of its basic structure. It is not hard to see that Merlin follows essentially the same security model as B^Square, BitLayer, and Citrea: the optimistic ZK-Rollup.
On first reading, the term may strike many Ethereum enthusiasts as odd: what is an "optimistic ZK-Rollup"? In the Ethereum community's understanding, the "theoretical model" of a ZK Rollup rests entirely on the reliability of cryptographic computation and needs no trust assumptions. The word "optimistic", by contrast, introduces exactly such an assumption: people should be optimistic that, most of the time, the rollup makes no errors and is reliable. Once an error does occur, the rollup operator can be punished via fraud proofs; this is the origin of the name Optimistic Rollup, also known as OP Rollup.
For the Ethereum ecosystem, the home base of rollups, an optimistic ZK-Rollup may seem unusual, but it matches the current reality of Bitcoin Layer 2. Due to technical limitations, the Bitcoin chain cannot fully verify a ZK proof; it can only, under special arrangements, verify a single step of the ZKP verification computation. Under this premise, the Bitcoin chain can really only support fraud-proof protocols: people can point out that a particular step of the off-chain ZKP verification is wrong and challenge it via a fraud proof. This obviously cannot match an Ethereum-style ZK Rollup, but it is the most reliable and robust security model Bitcoin Layer 2 can achieve today.
Under the optimistic ZK-Rollup scheme above, suppose the Layer 2 network has N authorized challengers. As long as one of those N is honest and reliable, able to detect errors and initiate a fraud proof at any time, Layer 2's state transitions are safe. Of course, reasonably mature optimistic rollups also need their withdrawal bridge protected by the fraud-proof protocol, and at present almost no Bitcoin Layer 2 can achieve this; they must rely on multi-signature/MPC instead, so the choice of multi-signature/MPC solution becomes a question closely tied to Layer 2 security.
For the bridge, Merlin chose Cobo's MPC service, with measures such as hot/cold wallet isolation. Bridged assets are jointly managed by Cobo and Merlin Chain, and any withdrawal must be jointly processed by the MPC participants of both parties, essentially relying on institutional credit to guarantee the withdrawal bridge's reliability. Of course, this is only a stopgap: as the project matures, the withdrawal bridge can be replaced by an "optimistic bridge" with a 1/N trust assumption by introducing BitVM and the fraud-proof protocol, though this will be harder to ship (at present, almost all Layer 2 official bridges rely on multi-sig).
Overall, we can summarize as follows: Merlin has introduced a PoS-based DAC, a BitVM-based optimistic ZK-Rollup, and a Cobo-based MPC asset custody solution. It solves the DA problem by opening DAC admission, secures state transitions by introducing BitVM and the fraud-proof protocol, and ensures the reliability of the withdrawal bridge by introducing the MPC service of the well-known custody platform Cobo.
Lumoz's two-step ZKP submission scheme
Earlier, we combed through Merlin's security model and introduced the concept of the optimistic ZK-Rollup. Merlin's technology roadmap also covers the decentralized Prover. As is well known, the Prover is a core role in the ZK-Rollup architecture, responsible for generating ZK proofs for the batches released by the Sequencer; proof generation is very hardware-intensive and a genuinely tricky problem.
To speed up ZK-proof generation, parallelizing the task is one of the most basic moves. Parallelization means splitting the proof-generation task into parts completed separately by different Provers, after which an Aggregator combines the sub-proofs into a whole.
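The split-prove-aggregate pattern can be modeled in miniature. This is a toy sketch, not Lumoz's actual pipeline: hashing stands in for proving, and the function names are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
import hashlib

def prove_subtask(chunk: bytes) -> bytes:
    """Stand-in for a Prover generating a sub-proof for one chunk
    of the batch (real proving is the hardware-intensive step)."""
    return hashlib.sha256(b"subproof:" + chunk).digest()

def aggregate(subproofs) -> bytes:
    """Stand-in for the Aggregator combining sub-proofs into one."""
    agg = hashlib.sha256()
    for p in subproofs:
        agg.update(p)
    return agg.digest()

batch = [b"txs-0", b"txs-1", b"txs-2", b"txs-3"]

# Sub-tasks run in parallel on different workers; map() preserves order.
with ThreadPoolExecutor(max_workers=4) as pool:
    subproofs = list(pool.map(prove_subtask, batch))

final_proof = aggregate(subproofs)
# Parallel and sequential execution yield the same aggregate.
assert final_proof == aggregate(prove_subtask(c) for c in batch)
```

The point of the sketch is the structure: proving parallelizes cleanly across workers, while aggregation is a cheap sequential step at the end.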
To accelerate this process, Merlin will use Lumoz's Prover-as-a-service solution, which pools a large number of hardware devices, assigns computing tasks to different devices, and distributes corresponding incentives, much like PoW mining.
This decentralized Prover scheme faces a class of attacks commonly known as front-running: suppose an Aggregator has assembled a ZKP and broadcasts it hoping to claim the reward. Other aggregators, seeing the ZKP's content, race to publish the same content ahead of it, claiming they produced the ZKP themselves. How is this resolved?
The most instinctive fix that comes to mind is to assign each Aggregator a specific task number: for example, only Aggregator A may take task 1, and no one else gets a reward even if they complete it. The problem is that this does not guard against single-point risk: if Aggregator A suffers a failure or disconnects, task 1 stalls and cannot complete. Moreover, assigning tasks to single entities forgoes the productivity gains of competitive incentives.
Polygon zkEVM once proposed an approach called Proof of Efficiency in a blog post: different Aggregators should be pushed to compete with one another, with incentives distributed first-come, first-served, so that the first Aggregator to submit a ZK proof on chain receives the reward. Of course, the post did not address how to solve the MEV front-running problem.
Lumoz adopts a two-step ZK-proof submission method: after an Aggregator generates a ZK proof, it does not send out the full content at first, but publishes only the ZKP's hash; more precisely, it publishes hash(ZKP + Aggregator Address). Even if others see the hash value, they do not know the underlying ZKP content and cannot front-run it directly.
If someone simply copies the hash and publishes it first, it accomplishes nothing: the hash binds the address of a specific aggregator X, so even if aggregator A publishes the hash first, when the hash's preimage is revealed, everyone will see that the address inside it belongs to X, not A.
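This is a commit-reveal pattern, and its binding property can be sketched directly. The following is a minimal illustration assuming SHA-256 and simple byte concatenation; the actual Lumoz encoding of the commitment may differ.

```python
import hashlib

def commit(zkp: bytes, aggregator_addr: str) -> str:
    """Step 1: publish only hash(ZKP || aggregator address)."""
    return hashlib.sha256(zkp + aggregator_addr.encode()).hexdigest()

def reveal_checks_out(zkp: bytes, claimed_addr: str, onchain_commit: str) -> bool:
    """Step 2: once the preimage is revealed, anyone can verify that the
    committed hash binds the proof to the claimed aggregator."""
    return commit(zkp, claimed_addr) == onchain_commit

proof = b"zkp-bytes"
c = commit(proof, "0xAggregatorX")

# X's reveal verifies; a copycat A who re-posted the same hash cannot
# claim it, because the preimage names X's address, not A's.
assert reveal_checks_out(proof, "0xAggregatorX", c)
assert not reveal_checks_out(proof, "0xCopycatA", c)
```

Because the aggregator's address is hashed into the commitment itself, copying someone else's commitment is self-defeating: the reveal will credit the original author.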
Through this two-step submission scheme, Merlin (Lumoz) solves the front-running problem in ZKP submission and thereby enables highly competitive proof-generation incentives, improving the speed of ZKP generation.
Merlin's Phantom: multi-chain interoperability
According to Merlin's technical roadmap, it will also support interoperability between Merlin and other EVM chains, following essentially the same path as ZetaChain's earlier idea: with Merlin as the source chain and another EVM chain as the target chain, when Merlin nodes perceive a cross-chain interoperability request from a user, they trigger the subsequent workflow on the target chain.
For example, an EOA account controlled by the Merlin network can be deployed on Polygon. **When a user issues a cross-chain interoperability instruction on Merlin Chain, the Merlin network first parses it and generates the transaction data to be executed on the target chain; the Oracle Network then performs MPC signing on that transaction to produce its digital signature. Merlin's Relayer node then broadcasts the transaction on Polygon,** completing the subsequent operations via Merlin's assets in the EOA account on the target chain.
Once the user's requested operation completes, the corresponding assets are forwarded directly to the user's address on the target chain, and in theory they can also cross straight into Merlin Chain. This solution has obvious advantages: it avoids the fee overhead of traditional cross-chain bridge contracts, and the security of cross-chain operations is guaranteed directly by Merlin's Oracle Network, with no reliance on external infrastructure. As long as users trust Merlin Chain, such cross-chain interoperability is unproblematic by default.
Summary
In this article, we gave a brief account of Merlin Chain's overall technical solution, which we hope helps more people understand Merlin's general workflow and gain a clearer picture of its security model. Given the Bitcoin ecosystem's current momentum, we believe this kind of technical explainer is valuable and needed by the public. We will follow Merlin, BitLayer, B^Square, and other projects over the long term and conduct deeper analysis of their technical solutions, so stay tuned!