Modular Interoperability Protocols

5 min read · Mar 6, 2023
Graphic originally from Celestia

Seems like these days everything is modular. Blockchains are modular. Banking is modular. My girlfriend last week tried to explain to me that our relationship could be modular too (the love remains the same, but swapping out different partners).

Turns out the cross-chain problem isn’t immune to modularisation either.

You heard it — interoperability protocols are now modular too.

Why modularise?

Modularisation leads to optimisation. If you break up a monolithic stack into its component pieces, you can swap out specific components for a more souped-up version.

Instead of being mid at a lot of things with a monolithic stack, you can have — in composite — a lot of individual goodly things with a modular stack.

Take a blockchain for example — a monolithic chain is limited in its functionality due to its need to balance scalability, security, and decentralisation.

We call this the sCaLaBiLiTy TriLeMma.

That’s because the monolithic stack has to be mid in one of the three dimensions. A super duper high-throughput blockchain that’s secure usually means it’s running on beefy nodes — i.e., sacrificing decentralisation.

On the other end, a decentralised and secure blockchain has to be slow as shit.

It is as dictated by the trilemma.

But by modularising a blockchain into its constituent parts (execution, DA, and settlement), we liberate ourselves from the authoritarian gaze of the scalability trilemma.

Now, rollups (i.e., execution layers) can be super duper fast while still inheriting the decentralisation and security of their underlying settlement layer. This is the endgame of systems with centralised block production and decentralised block verification.

Rollup + settlement layer (with enshrined DA in this case) = a composite system that’s goodly in all three dimensions.


How modularise?

OK, so now that you’re modular-pilled, let’s dive into how an interoperability protocol can be modular.

Like blockchains, interoperability protocols are composed of three distinct parts:

  1. Application: Interpreting data in a standard schema
  2. Verification: Ensuring the validity of the data being passed
  3. Transport: Moving the data from one domain to another
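
The three-layer split above can be sketched as swappable components. This is a minimal toy model, not any real protocol's API — the layer names come from the list above, but every function and value here is illustrative:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: the three layers of an interoperability protocol,
# each independently swappable. All names and checks are illustrative.

@dataclass
class InteropStack:
    transport: Callable[[bytes, str, str], bytes]  # moves data between domains
    verify: Callable[[bytes, bytes], bool]         # ensures validity of the data
    interpret: Callable[[bytes], dict]             # reads data in a standard schema

    def deliver(self, message: bytes, proof: bytes, src: str, dst: str) -> dict:
        relayed = self.transport(message, src, dst)
        if not self.verify(relayed, proof):
            raise ValueError("verification failed")
        return self.interpret(relayed)

# Swapping one layer leaves the other two untouched:
multisig_stack = InteropStack(
    transport=lambda m, s, d: m,                    # trivial in-process "relay"
    verify=lambda m, p: p == b"2-of-3-signatures",  # stand-in for a multisig check
    interpret=lambda m: {"payload": m.decode()},
)
zk_stack = InteropStack(
    transport=multisig_stack.transport,             # same transport...
    verify=lambda m, p: p == b"snark-proof",        # ...different verification
    interpret=multisig_stack.interpret,             # same application schema
)
```

The point of the sketch is the constructor: upgrading from multisig to zk verification changes one field, not the whole stack.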

Like blockchains — interoperability protocols can overcome existing limitations by modularising the stack and swapping for hyper-optimised parts.

Interoperability protocols are also constrained to a trilemma that Connext calls the Interoperability Trilemma.

While it admittedly didn’t catch on quite as well as the Scalability Trilemma, it paints a correct picture: interoperability protocols have to trade off security (which Connext calls Trustlessness) for time-to-market (Extensibility).

For example, a multi-signature TSS interoperability protocol (Team Human from my previous post) can expand to new blockchains more easily than a zk-SNARK light client interoperability protocol (Team Math) because the overhead is lower: the former only requires signers to watch a new Outbox contract, while the latter requires new zk circuits for every new light client implementation.

My take on the tradeoffs of interoperability protocols. From my talk at Interop Summit in Denver.

Again we see a monolithic stack can only be goodly at one thing and has to be mid at the other. But by modularising the interoperability protocol, it can start being goodly at both!

Modular verification

For example, allowing for the verification layer to be modular means that an interoperability protocol can be easily extended to new chains by being Team Human/Team Economics — and over time, the interoperability protocol can be more secure by adding in optimistic verification (Team Game Theory) or native verification via light clients.

From my talk at Interop Summit in Denver.

To date, this is the most adopted form of modularisation of the interoperability protocol stack.

We see teams like LayerZero and Router Protocol adopt app-configurable verification parameters. For LayerZero, applications can choose their oracle-relayer pair for the Ultra Light Node.

For Router Protocol, apps can opt into using their own external validators (perhaps using EigenLayer? o_O) in addition to the Router Chain’s validators in order to verify a transaction.

In a different vein, Hyperlane and Orb Labs offer different security modules using various verification methods — from multisignature (Team Human) and PoS (Team Economics) to optimistic verification (Team Game Theory).
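
App-configurable verification boils down to a registry of verification modules that each application opts into. Here's a hedged sketch of that idea — the module names, app names, and stand-in checks are all made up for illustration, not taken from Hyperlane, LayerZero, or Router Protocol:

```python
# Hypothetical sketch of app-configurable verification modules.
# The "checks" are trivial stand-ins for real signature/fraud-proof logic.

def multisig_verify(message: bytes, proof: bytes) -> bool:
    # Team Human: accept if enough signers attested (stand-in: count "sig" tokens).
    return proof.count(b"sig") >= 2

def optimistic_verify(message: bytes, proof: bytes) -> bool:
    # Team Game Theory: accept unless a watcher raised a fraud flag.
    return b"fraud" not in proof

VERIFICATION_MODULES = {
    "multisig": multisig_verify,
    "optimistic": optimistic_verify,
}

# Each application chooses its own module. Upgrading an app's security later
# is a one-line config change, not a protocol migration.
APP_CONFIG = {
    "fast-bridge-app": "multisig",
    "high-value-app": "optimistic",
}

def verify_for_app(app: str, message: bytes, proof: bytes) -> bool:
    module = VERIFICATION_MODULES[APP_CONFIG[app]]
    return module(message, proof)
```
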

Connext’s Amarok upgrade and Hop Protocol v2 both modularise their verification layer by outsourcing verification to canonical bridges for L2 to L2 swaps — with interest in integrating other interoperability protocols over time.

Modular transport

A modular transport layer is a relatively new concept compared to that of a modular verification layer.

The benefit of a modular transport layer is interoperability… which — I know — seems kind of recursive.

But hear me out — to date, every interoperability protocol has used their own transport layer (i.e., their own routers). Even modular interoperability protocols like Connext and Hyperlane have their own routers and router specifications.

Thus, Connext and Hyperlane cannot use each other’s routers. As a result, they’re not interoperable with one another.

Polymer Labs is the only team so far that has modularised the transport layer. Instead of a proprietary router specification, Polymer leverages IBC for its transport layer.

Chains can outsource their IBC transport layer to Polymer (using multi-hop channels connecting domains) — and also use the Polymer optimistic or zk-SNARK light client implementation for the modular verification layer.
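
The mechanics of a shared transport spec can be sketched in a few lines. This is a toy in-memory model of IBC-style channels and packets — the channel API and chain names are illustrative, not the actual IBC or Polymer interface:

```python
# Hypothetical sketch: one transport layer shared by every protocol that
# speaks its spec, instead of each protocol running proprietary routers.

class SharedTransport:
    def __init__(self):
        # (src, dst) -> ordered queue of packets, like an IBC-style channel
        self.channels: dict = {}

    def open_channel(self, a: str, b: str) -> None:
        self.channels[(a, b)] = []
        self.channels[(b, a)] = []

    def send(self, src: str, dst: str, packet: bytes) -> None:
        self.channels[(src, dst)].append(packet)

    def recv(self, src: str, dst: str) -> bytes:
        return self.channels[(src, dst)].pop(0)

# Two chains with otherwise different stacks can now exchange packets,
# because both sides speak the same transport spec.
transport = SharedTransport()
transport.open_channel("ethereum", "cosmos-hub")
transport.send("ethereum", "cosmos-hub", b"ICS-20 token transfer")
packet = transport.recv("ethereum", "cosmos-hub")
```

The design point: interoperability between protocols falls out of agreeing on the transport spec, rather than each team integrating the other's routers pairwise.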

As a result, Polymer is an interoperable interoperability protocol.

Polymer-enabled chains (e.g., Ethereum) are able to interoperate with other IBC-enabled chains like native chains on Cosmos — as well as other ecosystems that IBC is expanding to, like Near and Polkadot.

As an added benefit, Polymer also inherits the robust application-level specifications of IBC — ICS standards like ICS-20 (fungible tokens), interchain accounts, and interchain queries — instead of re-creating them from scratch on a monolithic stack.

In conclusion

I’ve previously talked about what the end-state may be for interoperability protocols, and also discussed how there will be millions of blockchains in the near future.

In a world of proliferating modular chains — rollups, app-rollups, dapp chains, L3s, RollApps, chainlets, or whatever you want to call them — we need to build extensible, permissionless, and automatic infrastructure in order to enable the widespread usage of these chains.

The power of modular interoperability protocols is that no tradeoffs need to be made.

A modular interoperability protocol can provide all the necessary qualities for all the new chains — without compromising security in the long run.

What comes next after the maturation of modular interoperability?

Something that I’ll be tackling in my next endeavor (soonTM).


Thank you for the engaging and thought-provoking conversations with the Polymer Labs, Hyperlane, Orb Labs and Connext teams.

Thank you to Bo and Tina for reviewing my work. Y’all the real gigabrains.




