Awesome Month at Braidpool under SoB!
cd /usr/sansh2356/braidpool/src/node && cargo test
On your mark, get set, Hash!
Hi, I’m Ansh, a third-year Computer Science Engineering student at UIET, Panjab University. If you’ve been following my previous blogs on Summer of Bitcoin (SoB), you already know about my project with Braidpool. In this post, I’ll be covering the progress I’ve made over the past month—an exciting and enriching journey filled with deep technical dives, networking protocols, and decentralized architecture.
My primary goal is to build a fully functional Braidpool node to support enhanced decentralized mining. This includes implementing the IPC layer, robust P2P communication, and network synchronization between nodes. Key components include syncing of Braid structures, RPC methods to fetch Braidpool metadata, and implementing consensus algorithms in Rust.
The initial week was dedicated to familiarizing myself with the existing codebase and syncing with my mentor, Bob McElrath. I spent time reviewing documentation and identifying blockers in my workflow. One of the early realizations was the significance of IPC-based communication for lower latency as opposed to traditional HTTP-based RPC. This understanding prompted me to alter the Bitcoin codebase and start working on the networking stack necessary for P2P communication within the Braidpool network.

Libp2p Stack & Networking Subsystems
I began by setting up the `libp2p` networking stack with the following protocols:
- QUIC – for transport
- Noise – for encrypted handshakes
- Kademlia – for peer discovery
- DNS – for address resolution
The `Swarm` was initialized to handle asynchronous events via `Tokio`, and `Kademlia` was implemented to support peer discovery using XOR metrics. Inspired by Bitcoin Core's seed node mechanism, I set up a remote boot node server at french.braidpool.net and added both A and AAAA DNS records for seamless IPv4/IPv6 resolution. During this process, I also enabled keystore generation in the ~/.braidpool directory, storing `.pem` files for local keypair creation.
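
For flavour, here is a minimal sketch of how such a stack can be wired together with rust-libp2p's `SwarmBuilder`. This is not the actual Braidpool code: the bootnode port, dependency features, and event handling below are placeholders.

```rust
// Cargo.toml (sketch): libp2p with "tokio", "quic", "dns" and "kad" features, plus tokio and futures.
use futures::StreamExt;
use libp2p::{kad, Multiaddr, SwarmBuilder};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Bootnode address; the port here is a placeholder.
    let bootnode: Multiaddr = "/dns4/french.braidpool.net/udp/4001/quic-v1".parse()?;

    let mut swarm = SwarmBuilder::with_new_identity()
        .with_tokio()
        .with_quic() // QUIC transport (encryption and multiplexing built in)
        .with_dns()? // resolves /dns4 and /dns6 multiaddrs
        .with_behaviour(|key| {
            // Kademlia DHT for peer discovery (XOR distance metric).
            let peer_id = key.public().to_peer_id();
            kad::Behaviour::new(peer_id, kad::store::MemoryStore::new(peer_id))
        })?
        .build();

    // Listen on an OS-assigned UDP port, then dial the bootnode so Kademlia
    // can start populating its routing table.
    swarm.listen_on("/ip4/0.0.0.0/udp/0/quic-v1".parse()?)?;
    swarm.dial(bootnode)?;

    loop {
        // Drive the swarm; the real node matches on discovery and connection events here.
        let event = swarm.select_next_some().await;
        println!("{event:?}");
    }
}
```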
PR#1 – Networking Stack & Bootnode Setup
Braidpool Config & Custom Initialization
Once the networking foundation was in place, I worked on the `braidpool.config` logic and pool configuration framework. This ensures that users or miners can easily configure and initialize a node based on their local setup. Unit tests were included to validate every configuration path.
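
To illustrate the general shape of this (assuming a TOML-style file; the field names and defaults below are hypothetical, not the actual `braidpool.config` schema), a node can deserialize the file into a typed struct and fall back to defaults when no file is present:

```rust
use serde::Deserialize;

// Hypothetical fields for illustration; the real braidpool.config schema differs.
#[derive(Debug, Deserialize)]
#[serde(default)]
struct BraidpoolConfig {
    listen_port: u16,
    bootnodes: Vec<String>,
    data_dir: String,
}

impl Default for BraidpoolConfig {
    fn default() -> Self {
        Self {
            listen_port: 25188, // placeholder default port
            bootnodes: vec!["french.braidpool.net".to_string()],
            data_dir: "~/.braidpool".to_string(),
        }
    }
}

fn load_config(path: &std::path::Path) -> Result<BraidpoolConfig, Box<dyn std::error::Error>> {
    // A missing file falls back to defaults so a miner can start a node with zero setup;
    // unit tests can then cover each configuration path independently.
    match std::fs::read_to_string(path) {
        Ok(contents) => Ok(toml::from_str(&contents)?),
        Err(_) => Ok(BraidpoolConfig::default()),
    }
}
```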
PR#2 – Continued: Configuration Implementation
Bead Sync & Peer Selection Logic
In collaboration with @abdaze, I contributed to the request-response functionality for the `bead_sync` protocol, which supports Initial Braid Download (IBD). I specifically focused on refining the peer selection criteria by evaluating:
- Subnet diversity
- Round-trip time (via the `ping` protocol)
- Randomization strategies (to prevent eclipse attacks)
- GeoIP-based distribution for node fairness
This logic closely aligns with best practices from the APTO protocol and Bitcoin Core’s 8-peer outbound connection policy.
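
The sketch below captures the spirit of that selection (the candidate type and the /16 grouping are illustrative, and GeoIP scoring is omitted): shuffle candidates so RTT ties break randomly, sort by round-trip time, and accept at most one peer per subnet.

```rust
use std::collections::HashSet;
use std::net::IpAddr;
use std::time::Duration;

use rand::seq::SliceRandom;

// Hypothetical candidate record; the real node derives this from ping results and address data.
struct PeerCandidate {
    addr: IpAddr,
    rtt: Duration,
}

/// Pick up to `max_outbound` peers, preferring low RTT and distinct subnets.
/// The shuffle before the (stable) sort makes ties in RTT break randomly,
/// which helps against eclipse attempts that rely on predictable selection.
fn select_outbound(mut candidates: Vec<PeerCandidate>, max_outbound: usize) -> Vec<PeerCandidate> {
    candidates.shuffle(&mut rand::thread_rng());
    candidates.sort_by_key(|c| c.rtt);

    let mut seen_subnets = HashSet::new();
    let mut chosen = Vec::new();
    for c in candidates {
        let subnet = match c.addr {
            // Group IPv4 peers by /16 and IPv6 peers by their first 32 bits (coarse but cheap).
            IpAddr::V4(v4) => u32::from(v4) >> 16,
            IpAddr::V6(v6) => (u128::from(v6) >> 96) as u32,
        };
        if seen_subnets.insert(subnet) {
            chosen.push(c);
            if chosen.len() == max_outbound {
                break;
            }
        }
    }
    chosen
}
```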
PR#3 – Bead Sync, Peer Selection & Security Enhancements
Braid Data Structures & Serialization
I worked extensively on the base architecture of the braid-node, focusing on the `Bead` structure which encapsulates both committed and uncommitted metadata. I implemented custom `Encodable` and `Decodable` traits for seamless serialization/deserialization over both RPC and IPC layers.
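
As a rough illustration of the pattern (the field names and trait shape below are simplified stand-ins, not the actual Braidpool definitions), the committed part is what gets hashed into the bead, the uncommitted part rides alongside it, and encoding writes both in a fixed byte layout that a matching decoder can read back:

```rust
use std::io::{self, Write};

// Simplified stand-ins for the real committed/uncommitted metadata.
struct CommittedMetadata {
    parents: Vec<[u8; 32]>, // hashes of parent beads
    timestamp: u64,
}

struct UncommittedMetadata {
    extra_nonce: u64,
}

struct Bead {
    committed: CommittedMetadata,
    uncommitted: UncommittedMetadata,
}

// Hypothetical trait mirroring the Encodable idea; a matching Decodable reads the same layout back.
trait Encodable {
    fn encode<W: Write>(&self, w: &mut W) -> io::Result<usize>;
}

impl Encodable for Bead {
    fn encode<W: Write>(&self, w: &mut W) -> io::Result<usize> {
        let mut written = 0;
        // Length-prefix the parent list so the decoder knows how many 32-byte hashes follow.
        let count = (self.committed.parents.len() as u32).to_le_bytes();
        w.write_all(&count)?;
        written += count.len();
        for parent in &self.committed.parents {
            w.write_all(parent)?;
            written += parent.len();
        }
        w.write_all(&self.committed.timestamp.to_le_bytes())?;
        w.write_all(&self.uncommitted.extra_nonce.to_le_bytes())?;
        Ok(written + 16) // the two u64 fields
    }
}
```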
This also included creating a robust unit-testing framework covering edge cases and various serialization flows. In addition, I revamped the `CpuNet` fork used for network testing and updated its Nix-based installation scripts.
PR#4 – Bead Structures & Serialization
PR#5 – CpuNet Refactor & Nix Setup
Bead Broadcasting with Floodsub
To optimize bead announcement propagation, I implemented the `Floodsub` protocol for broadcasting valid beads. These beads are created when a miner finds a partial work solution (weaker than mainnet difficulty but above the Braidpool’s minimum). We benchmarked this mechanism using the Criterion crate to assess its performance.
Unit tests using `tokio::test` confirmed reliable bead propagation across a test mesh.
My mentor and I opted for Floodsub for the bead-announce path. Whenever an ASIC returns suitable values for the rolled attributes (version, nonce, extranonce, and time, as described in the relevant BIPs for enlarging the overall hashing space) that yield a valid bead, i.e. a weak workshare whose local difficulty is lower than mainnet's but higher than the MINIMUM_TARGET_DIFFICULTY derived from simulations run by our mentor, the braid-node constructs the candidate bead and broadcasts it over the Braidpool network.
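
In other words, every returned share is checked against two thresholds. Here is a tiny sketch of that gating decision (the numeric values are placeholders; the real node compares 256-bit targets, and the broadcast itself goes out through the Floodsub behaviour):

```rust
// Placeholder difficulty values for illustration only.
const MAINNET_DIFFICULTY: u64 = 90_000_000_000_000;
const MINIMUM_TARGET_DIFFICULTY: u64 = 1_000;

enum ShareOutcome {
    Block,   // meets mainnet difficulty: submit the full block upstream
    Bead,    // weak workshare: construct a bead and broadcast it via Floodsub
    Discard, // below the pool minimum: not worth announcing
}

fn classify_share(share_difficulty: u64) -> ShareOutcome {
    if share_difficulty >= MAINNET_DIFFICULTY {
        ShareOutcome::Block
    } else if share_difficulty >= MINIMUM_TARGET_DIFFICULTY {
        ShareOutcome::Bead
    } else {
        ShareOutcome::Discard
    }
}
```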
PR#6 – Floodsub Bead Broadcasting
Consensus Algorithms
This part was central to the logical correctness of Braidpool. I implemented the consensus logic, focusing on:
- Ancestor-descendant relationships
- Sub-braid pruning logic for reduced memory usage
- Index-based optimization to lower request-response latency
I also implemented the calculation of “Cohorts”—sub-graphs separated by cuts, ensuring robust braid integrity during chain selection. This involved generating random braid structures and running a suite of tests across the following functions:
- Genesis creation
- Child bead generation
- Descendant work evaluation
- Sub-braid identification
- Cohort derivation
- Ancestor-finding algorithms
Since these functions sit at the heart of consensus, I tested them thoroughly: the `test_braid` directory contains complete unit tests run over randomly generated braid structures, along with the random bead generation used to drive them. Together with my teammate @arima_kun (Discord), I opted for an index-based structure to cut request-response times and speed up response generation on a given braid-node. Throughout, I kept memory overhead to a minimum and aimed for the lowest practical time and space complexity in the core consensus functions, so Braidpool's latency stays low.
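
To give a concrete feel for one piece of this (simplified: the real code keys beads by hash, uses the index structure mentioned above, and also derives cohorts and descendant work), here is how the ancestor set of a bead can be collected from a parent map:

```rust
use std::collections::{HashMap, HashSet};

type BeadId = u32; // stand-in for a bead hash / index

/// Collect every ancestor of `bead` by walking the parent map. The braid is a DAG,
/// so visited beads are tracked to avoid re-expanding shared ancestry.
fn ancestors(bead: BeadId, parents: &HashMap<BeadId, Vec<BeadId>>) -> HashSet<BeadId> {
    let mut seen = HashSet::new();
    let mut stack = parents.get(&bead).cloned().unwrap_or_default();
    while let Some(p) = stack.pop() {
        if seen.insert(p) {
            if let Some(grandparents) = parents.get(&p) {
                stack.extend(grandparents.iter().copied());
            }
        }
    }
    seen
}

fn main() {
    // Tiny braid: 0 is genesis; 1 and 2 both extend it; 3 merges them.
    let parents: HashMap<BeadId, Vec<BeadId>> = HashMap::from([
        (0, vec![]),
        (1, vec![0]),
        (2, vec![0]),
        (3, vec![1, 2]),
    ]);
    assert_eq!(ancestors(3, &parents), HashSet::from([0, 1, 2]));
}
```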
PR#7 – Consensus Logic & Cohort Handling
NAT Traversal & Hole Punching (Relay Protocol)
In a decentralized mesh, nodes behind NAT or firewalls pose challenges. I attempted a full implementation of `hole-punching` using `dcutr` and `server-relay` protocols from `libp2p` to address this. Although the relay approach worked for some nodes, it proved non-scalable due to low bead interval frequency and potential DDoS vulnerabilities.
The `dcutr` hole-punching mechanism wasn't fully successful in all cases, which I reported to my mentor. For future iterations, we're considering tunneling approaches using SSH as a more stable alternative.
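
For reference, the general shape of that attempt followed the usual rust-libp2p DCUtR pattern, roughly as sketched below (this is not the code that landed: the protocol version string is hypothetical, and the real node wires in its other behaviours as well):

```rust
// Cargo.toml (sketch): libp2p with "tokio", "tcp", "quic", "noise", "yamux",
// "dns", "relay", "dcutr", "identify" and "macros" features enabled.
use libp2p::{dcutr, identify, noise, relay, swarm::NetworkBehaviour, tcp, yamux, SwarmBuilder};

// Composed behaviour: the relay client holds a reservation on the relay server,
// identify exchanges observed addresses, and DCUtR coordinates the direct upgrade.
#[derive(NetworkBehaviour)]
struct Behaviour {
    relay_client: relay::client::Behaviour,
    identify: identify::Behaviour,
    dcutr: dcutr::Behaviour,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let _swarm = SwarmBuilder::with_new_identity()
        .with_tokio()
        .with_tcp(tcp::Config::default(), noise::Config::new, yamux::Config::default)?
        .with_quic()
        .with_dns()?
        .with_relay_client(noise::Config::new, yamux::Config::default)?
        .with_behaviour(|key, relay_client| Behaviour {
            relay_client,
            identify: identify::Behaviour::new(identify::Config::new(
                "/braidpool/0.1.0".into(), // hypothetical protocol version string
                key.public(),
            )),
            dcutr: dcutr::Behaviour::new(key.public().to_peer_id()),
        })?
        .build();

    // The node would then listen on a "/p2p-circuit" address through the relay and let
    // DCUtR try to upgrade relayed connections into direct ones once both sides punch through.
    Ok(())
}
```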
Complete Hole Punching + Relay Protocol Implementation
Summary of Pull Requests (PRs)
- PR#1 – Setup of libp2p networking stack, bootnode configuration, and local node initialization via `braidpool.config`.
- PR#2 – Request-response bead sync protocol, peer selection logic, and security enhancements.
- PR#3 – Core bead data structures, serialization/deserialization traits, and unit test coverage.
- PR#4 – CpuNet refactor – Sub-dependency updates and Nix-based testnet setup for local braid testing.
- PR#5 – Floodsub implementation for low-latency bead broadcasting and unit tests.
- PR#6 – Implementation of consensus logic, cohort calculation, pruning, and ancestor-descendant relationship handling.
- Auxiliary PR – Complete relay and hole punching protocol with libp2p, `dcutr`, and `autonat` in Rust.
Post Mid-term
- After the mid-term evaluation, I will focus on Stratum integration and API implementation, which will be crucial for securing message transfer, distributing the `extra_nonce` to downstream nodes as part of the mining protocol, and reducing latency.
- I will also work on `deterministic-template` generation with my teammate @mstrr (Discord); it will serve as the adhesive between all the logic implemented above and provide cohesion across the different functionalities.
- Finally, I am aiming for complete test coverage across the entire Braidpool repository, along with concise and complete documentation of all functionality for future developers' reference.
It has been an awesome journey so far, and I'm extremely excited about what lies ahead. Stay tuned for further updates!