In which mayday mayday we are syncing about*

Don’t trust, terrify!

The “don’t trust, verify!” slogan is beyond my comprehension. I board airplanes without verifying anything about the pilot or the aircraft; I visit restaurants and foolishly eat without verifying what will end up in my bloodstream; I take medicines without verifying the supply chain. Why would I protect my money with measures I am not taking to protect my life?!

Why not fiat?

Because fiat supply is inflated unpredictably; because the political printing of money corrupts the democratic sphere, focusing the political debate on the allocation of money rather than the creation of wealth; and because regulators are attempting to control financial transfers, eliminate cash, and restrict economic freedom.

Why not Bitcoin?

Initial Block Download (IBD) is the process by which new nodes join the network. Since the Bitcoin Core devs’ ethos is “don’t trust, verify!”, the default behaviour of a new node is to download and verify the entire history of the Bitcoin ledger. Consequently, Bitcoin’s throughput is deliberately limited to keep IBD fast; indeed, processing too many transactions per second today would make it difficult for a user joining in 2040 to verify that today’s transactions were valid. I kid you not.
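To see why throughput and IBD are coupled, here is a back-of-the-envelope sketch; the throughput, transaction size, verification speed, and time span below are hypothetical numbers chosen for illustration, not Bitcoin’s actual parameters.

```python
# Back-of-the-envelope IBD estimate (hypothetical numbers, for illustration only).
TPS = 7                      # assumed L1 throughput, transactions per second
TX_SIZE_BYTES = 400          # assumed average transaction size
VERIFY_TX_PER_SEC = 5_000    # assumed verification speed of a syncing node
YEARS = 17                   # e.g., a node joining in 2040 verifying history written from today onward

SECONDS_PER_YEAR = 365 * 24 * 3600
total_txns = TPS * SECONDS_PER_YEAR * YEARS
ledger_bytes = total_txns * TX_SIZE_BYTES
ibd_days = total_txns / VERIFY_TX_PER_SEC / 86_400

print(f"transactions to verify: {total_txns:,}")
print(f"ledger size: {ledger_bytes / 1e12:.1f} TB")
print(f"IBD time at {VERIFY_TX_PER_SEC} tx/s: {ibd_days:.1f} days")
# Multiplying TPS by 100 multiplies both the ledger size and the IBD time by 100;
# that is exactly the coupling described above.
```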

Why Kaspa?

First, to make Satoshi great again. PHANTOM is really a neat generalization of Nakamoto Consensus (when k=0 PHANTOM coincides with the longest chain rule); it follows the same principles, just with support for concurrency. It is Satoshi at its best, and the only path to fulfil His electronic cash vision. We make DAGs because we know how to and because no-one else does. We implement PHANTOM because we want to ping with “send txn” and be ponged “txn mined” in the same manner we get the results of a Google search or send an email. We picked up this challenge in the same way Bitcoin Core devs chose to work on Taproot: it is cool and not entirely useless.
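To make the “k=0 coincides with the longest chain rule” remark concrete, here is a minimal toy sketch; it is not the real PHANTOM/GHOSTDAG algorithm (the Block class, the add_block helper, and the scoring shortcut are illustrative assumptions), it only shows that when the blue set collapses to the selected chain, tip selection by blue score is tip selection by chain length.

```python
# Toy sketch only, not the real PHANTOM/GHOSTDAG algorithm: with k = 0 the
# blue set of a block degenerates to its selected chain, so the blue score
# is just chain length and tip selection reduces to the longest-chain rule.
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    parents: list        # list of Block (a DAG block may reference several tips)
    blue_score: int = 0  # with k = 0: number of blocks on the selected chain below this one

def add_block(name, parents):
    # Selected parent = parent with the highest blue score; when every block
    # has a single parent this is exactly Bitcoin's longest-chain rule.
    selected = max(parents, key=lambda b: b.blue_score, default=None)
    score = selected.blue_score + 1 if selected else 0
    return Block(name, parents, score)

genesis = add_block("G", [])
a = add_block("A", [genesis])
b = add_block("B", [genesis])   # a parallel (concurrent) block
c = add_block("C", [a, b])      # a DAG block merging both tips
print(c.blue_score)             # 2 -- selected chain is G <- A <- C
```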

Make Satoshi Great Again

My not-novel suggestions for scaling up Nakamoto Consensus:

  • CPU consumption — process a few transactions per second on L1, while supporting large payloads, which are cheap CPU-wise and which enable an easy and healthy L2 (e.g., SNARK/STARK proofs for ZK rollups).
  • Bandwidth consumption — design sharding of data and data-availability proofs, similar to the Eth 2.0 design that stopped after phase 1 (see Ethereum’s revisited rollup-centric roadmap); this is an open research question, since PoW has no native identities to serve as the basis for sharding.
  • Memory consumption and disk I/O — implement class-group-based accumulators, which require no trusted setup and which allow pruning the UTXO set and running as a stateless client. Challenges include: the UX of storing and updating the witnesses; weighing memory savings against higher CPU consumption.
  • Storage — prune block data, reducing the storage requirement from O(block header size)*O(num of blocks in history) + O(block size)*O(num of blocks in history) to O(block header size)*O(num of blocks in history) + O(block size)*O(num of blocks in pruning window); additionally, consider pruning block headers, further reducing the requirement to O(block size)*O(num of blocks in pruning window); a back-of-the-envelope calculation of these savings appears after this list. Pruning block headers is an open research question, since it is then unclear how a new node is guaranteed to be syncing to the consensus state and not to a stale or malicious branch. However, arguably, any system with (deterministic) finality enjoys/suffers/relies on weak subjectivity, and therefore reading the entire history of PoW might be redundant.
  • IBD time — implement a DAG-adapted version of FlyClient to reduce the cost of syncing a new node from O(num of blocks in history) to O(log(num of blocks in history)); see the sketch after this list. This does not reduce the storage at the syncer, but it does allow the syncee to sync without downloading the entire history of block headers.
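To put rough numbers on the Storage bullet above, here is the promised back-of-the-envelope calculation; the header size, block size, block rate, history length, and pruning-window length are hypothetical values chosen only to show the orders of magnitude, not Kaspa’s actual parameters.

```python
# Hypothetical parameters, for illustration of the pruning arithmetic only.
HEADER_BYTES = 100            # assumed block header size
BLOCK_BYTES = 200_000         # assumed average block size
BLOCKS_PER_SEC = 1            # assumed block rate (high-rate DAG)
HISTORY_YEARS = 10            # assumed age of the ledger
PRUNING_WINDOW_DAYS = 3       # assumed pruning window

blocks_in_history = BLOCKS_PER_SEC * 365 * 24 * 3600 * HISTORY_YEARS
blocks_in_window = BLOCKS_PER_SEC * PRUNING_WINDOW_DAYS * 24 * 3600

full_node   = HEADER_BYTES * blocks_in_history + BLOCK_BYTES * blocks_in_history
pruned_data = HEADER_BYTES * blocks_in_history + BLOCK_BYTES * blocks_in_window
pruned_all  = BLOCK_BYTES * blocks_in_window   # headers pruned too (the open question)

for label, b in [("full history", full_node),
                 ("pruned block data", pruned_data),
                 ("pruned data + headers", pruned_all)]:
    print(f"{label:>22}: {b / 1e9:,.1f} GB")
```

Under these made-up numbers, pruning block data cuts storage from tens of terabytes to tens of gigabytes, and pruning headers shaves off most of what remains, which is why the header-pruning question is worth the research effort.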
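And for the IBD-time bullet, a quick comparison of how many headers a naive syncee downloads versus a log-scale FlyClient-style sample; the history length is a hypothetical figure, and the real protocol’s constant and polylog factors are omitted.

```python
import math

# Hypothetical history length; at one block per second this is roughly 10 years of blocks.
num_blocks = 315_360_000

naive_headers = num_blocks                        # download every header: O(n)
# A FlyClient-style client needs on the order of log(n) (up to polylog factors)
# randomly sampled headers plus their MMR inclusion proofs; constants omitted.
flyclient_headers = math.ceil(math.log2(num_blocks))

print(f"naive sync:     {naive_headers:,} headers")
print(f"log-scale sync: ~{flyclient_headers} sampled headers (plus proofs)")
```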

What are you syncing about?

Concluding today’s topic. Kaspa is PoW on steroids. It is optimized for informed users, not for ideologues. Its throughput ought to be constrained by real-time performance considerations, not by the performance of downloading and verifying the historical ledger, which is an auxiliary trust gateway but not the primary pillar of trust in the system.

GHOST protocol (Ethereum), CS postdoc @ Harvard
