Thread
.@btcazores 2nd day 10am. Change of schedule: Lightning jamming rather than general Lightning topics. Antoine Riard and @ffstls have written a book on channel jamming: jamming-dev.github.io/book/
In the worst case an attacker can block a channel for 2 weeks. It hasn't been exploited at scale but it has been demonstrated. The attacker can't steal money, but in a competitive routing fee market you could block your competitors' channels.
An alternative strategy for attacking competitors would be to force close their routing channels. Routing nodes are vulnerable to these attacks, but edge nodes that aren't seeking routing fees are not.
There is no reliable way to distinguish jamming attacks from honest payments. A 20 second delay might be honest behavior, so how do you tell the difference?
DLCs and HODL invoices also display similar behavior to jamming attacks if capital is locked up for a long time.

The longer the timelock (CLTV delta), the longer you can be jammed. The shorter the timelock, the more danger that you don't close your channel in time if attacked.
Loops of routes going through your node multiple times amplify the attack. This could be identified because the same hash and preimage are used across the whole route, so you'd see the same hash multiple times.
Do these jamming attacks kill the concept of PTLCs? Reusing the same hash and preimage across the whole route provides protection against jamming attacks, since loops become detectable. PTLCs wouldn't share the same secret across the whole route.
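A minimal sketch of the hash-reuse detection mentioned above, assuming a forwarding node simply counts how often each payment hash crosses it; the data structure and names are illustrative, not from any implementation:

```rust
use std::collections::HashMap;

struct ForwardMonitor {
    seen: HashMap<[u8; 32], u32>, // payment hash -> times forwarded
}

impl ForwardMonitor {
    fn new() -> Self {
        ForwardMonitor { seen: HashMap::new() }
    }

    // Returns how many times this payment hash has now crossed this node.
    // Anything above 1 means the route loops through us. With PTLCs each
    // hop would carry a different point, so this check would stop working.
    fn record(&mut self, payment_hash: [u8; 32]) -> u32 {
        let count = self.seen.entry(payment_hash).or_insert(0);
        *count += 1;
        *count
    }
}

fn main() {
    let mut monitor = ForwardMonitor::new();
    let hash = [0xab; 32];
    assert_eq!(monitor.record(hash), 1); // first pass looks like a normal forward
    assert_eq!(monitor.record(hash), 2); // same hash again: the route loops
    assert_eq!(monitor.record([0x01; 32]), 1); // unrelated payment
    println!("loop detected on repeated payment hash");
}
```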
So, solutions to jamming attacks. Three types of solutions: reputation, monetary approaches, and HTLC slots per range of value. An attacker can fill a slot bucket for small amounts but not one reserved for larger amounts.
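The slot-bucketing idea can be sketched as follows; the bucket boundaries and slot counts are made up for illustration and are not from any proposal:

```rust
// Hypothetical HTLC slot buckets per value range: an attacker sending
// tiny HTLCs can only ever exhaust the small-value buckets.
struct Bucket {
    min_msat: u64,   // smallest HTLC value this bucket accepts
    slots_left: u32, // remaining HTLC slots in this bucket
}

struct ChannelSlots {
    buckets: Vec<Bucket>, // sorted by min_msat ascending
}

impl ChannelSlots {
    fn new() -> Self {
        ChannelSlots {
            buckets: vec![
                Bucket { min_msat: 0, slots_left: 20 },         // small HTLCs
                Bucket { min_msat: 100_000, slots_left: 10 },   // medium
                Bucket { min_msat: 10_000_000, slots_left: 5 }, // large
            ],
        }
    }

    // Occupy a slot in the highest bucket the HTLC qualifies for,
    // falling back to lower-value buckets.
    fn try_add_htlc(&mut self, amount_msat: u64) -> bool {
        for b in self.buckets.iter_mut().rev() {
            if amount_msat >= b.min_msat && b.slots_left > 0 {
                b.slots_left -= 1;
                return true;
            }
        }
        false
    }
}

fn main() {
    let mut slots = ChannelSlots::new();
    // 1 sat HTLCs only qualify for the smallest bucket: after 20 of them
    // it is exhausted, but the larger buckets are untouched.
    for _ in 0..20 {
        assert!(slots.try_add_htlc(1_000));
    }
    assert!(!slots.try_add_htlc(1_000));
    assert!(slots.try_add_htlc(50_000_000)); // a large HTLC still fits
    println!("small bucket jammed, large bucket still available");
}
```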
Reputation: working out who the bad guys are over time based on behavior, performance.

Monetary: pay upfront fees based on how long you lock up capital in the channel. Currently a fee is only paid on success of the routed payment, rather than upfront.
Pay an upfront fee for even considering an HTLC. But now a routing node could take the upfront fee and not actually consider routing that payment: getting paid a fee for nothing.

How high would the upfront fee need to be to stop jamming attacks? It could be very low and still prevent them.
Based on simulations, the upfront fee could be ~0.02% of the corresponding success-based fee. Remember the attacker isn't stealing funds with jamming attacks.

Reputation may be better than fees. But jams can mimic normal payments, so honest behavior could end up hurting a node's reputation.
Fee rates are announced by routing nodes and are gossiped and cached by other nodes. A routing node can't change its fees once they have been included in a routed payment.
There is a research paper on jamming attacks, not yet published. You don't need many attempts to successfully pull off a jamming attack; it is easy to pull off. Upfront fees would add a third fee to be gossiped; we already have the base fee and the proportional fee.
Upfront fees could be scaled on the CLTV value of the HTLC. There are multiple ways to architect upfront fees: they could be paid backwards along the route, not just forwards.
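Putting the numbers together, here is a hypothetical upfront fee schedule. The ~0.02% ratio is the figure from the simulations mentioned above; the linear CLTV scaling and the reference delta are made-up illustrations, not a spec:

```rust
// Success-based fee as currently gossiped: base fee plus a proportional
// part expressed in millionths (ppm) of the payment amount.
fn success_fee_msat(amount_msat: u64, base_fee_msat: u64, prop_millionths: u64) -> u64 {
    base_fee_msat + amount_msat * prop_millionths / 1_000_000
}

// Hypothetical upfront fee: 0.02% of the success fee, scaled linearly
// by how long the HTLC's CLTV delta can lock up funds.
fn upfront_fee_msat(success_fee: u64, cltv_delta: u64, reference_delta: u64) -> u64 {
    success_fee * 2 / 10_000 * cltv_delta / reference_delta
}

fn main() {
    // Routing 1 BTC with a 1 sat base fee and 1000 ppm proportional fee.
    let success = success_fee_msat(100_000_000_000, 1_000, 1_000);
    assert_eq!(success, 100_001_000); // 100,001 sats, in msat
    // An HTLC with twice the reference CLTV delta pays twice the upfront fee.
    assert_eq!(
        upfront_fee_msat(success, 80, 40),
        2 * upfront_fee_msat(success, 40, 40)
    );
    println!("upfront fee: {} msat", upfront_fee_msat(success, 40, 40));
}
```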

How can we charge fees based on actual resolution time? You would need to measure time accurately, so is honesty required?
Onion routing makes it impossible to identify the source of a jamming attack. Could the protocol relax onion routing, or agree to unwrap the onion when a jamming attack happens?
The local reputation of your peers could be determined by their ability to manage their own peers. Is your peer the source of the jamming attack, or is it a peer of your peer? Could you prove that you weren't the source of the jamming attack?
Should reputation be local rather than global? Or perhaps both? You could send the reputation data you have on your peers to other peers.
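A toy sketch of local reputation, assuming each node scores only its direct peers by how long forwarded HTLCs were held; the cutoff and scoring are illustrative, and the 60-second threshold shows exactly the problem raised earlier (HODL invoices and DLCs resolve slowly but honestly):

```rust
use std::collections::HashMap;

#[derive(Default)]
struct PeerStats {
    resolved_fast: u32,
    held_long: u32,
}

struct LocalReputation {
    peers: HashMap<String, PeerStats>,
}

impl LocalReputation {
    fn new() -> Self {
        LocalReputation { peers: HashMap::new() }
    }

    // Record how long an HTLC forwarded for this peer was held.
    // The 60 second cutoff is arbitrary: slow resolution can be honest,
    // which is why reputation alone is hard to get right.
    fn record(&mut self, peer: &str, held_secs: u64) {
        let stats = self.peers.entry(peer.to_string()).or_default();
        if held_secs <= 60 {
            stats.resolved_fast += 1;
        } else {
            stats.held_long += 1;
        }
    }

    // Fraction of this peer's HTLCs that resolved quickly; unknown peers
    // start neutral at 0.5.
    fn score(&self, peer: &str) -> f64 {
        match self.peers.get(peer) {
            Some(s) if s.resolved_fast + s.held_long > 0 => {
                s.resolved_fast as f64 / (s.resolved_fast + s.held_long) as f64
            }
            _ => 0.5,
        }
    }
}

fn main() {
    let mut rep = LocalReputation::new();
    rep.record("carol", 5);
    rep.record("carol", 10);
    rep.record("mallory", 1_209_600); // held for two weeks
    assert!(rep.score("carol") > rep.score("mallory"));
    assert_eq!(rep.score("unknown"), 0.5);
    println!("carol: {}, mallory: {}", rep.score("carol"), rep.score("mallory"));
}
```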
.@btcazores 2nd day 11am. Tracing in Bitcoin Core w/ @jb55 and @0xB10C. Tracing was originally used to trace network activity in the Linux kernel: hook any runtime function, any kernel function, and see its activity.
You can trace any userspace function without modifying any of the programs. The downside is that userspace functions change all the time, as in Bitcoin Core. eBPF/USDT tracepoints address this downside.
This is not calling the function. Hooking into the tracepoint at runtime allows you to attach utilities and visualizers at that point. No negative performance impact.
Tracepoints are compiled into the Bitcoin Core release builds. If you use one there is the overhead of calling the function, but no negative performance impact if you don't use it.
You can write bad/expensive tracepoints and those would have a negative performance impact, but you can also compile without any tracepoints. How could you figure out whether a tracepoint is good or bad?
Tracepoints do require privileges (e.g. sudo), unlike logging. When to use tracing versus logging? Tracing when you want to build custom monitoring tools that a machine interprets; logging and logging flags for things that humans want to monitor.
For monitoring RBF, i.e. replacement of transactions in mempools, would you use logging or tracing? How would reviewers determine whether a PR is a good or bad tracing PR?
Currently you just get @0xB10C and @jb55 to review your PR. We probably need better documentation on how to write good tracepoints that don't negatively impact performance.
If you don't want to maintain your own fork of Bitcoin Core with custom logs, do you get a tracing PR merged instead? How do you decide which functions should be hooked into in the Core repo?
An example of a peer observer using tracepoints was built by @0xB10C. @lopp is building something similar with Grafana. demo.peer.observer/
If we can agree on where the tracepoints should be in Core, this opens up a huge design space for monitoring tools that don't need any further changes to Core.

Tracing has been used to prove that coin selection changes weren't pushing up overall fees, in a simulation in a @josibake Core PR.
It makes the code uglier if tracepoints are littered all over the codebase. Any other downsides? Maintenance burden? You still need to justify what you will be using the tracepoint for. Once tracepoints are introduced, the things built on them have to be factored in so as not to break them.
Would the introduction of lots of tracepoints make it harder to refactor the code? Would a PR author need to know what applications rely on a tracepoint? Would functional tests pick up the breaking of downstream monitoring projects?
Should release notes document when tracepoints are no longer supported? Or would tracepoints be second class citizens with no guarantees of future support? We need better docs and guidance on how tracepoints will be dealt with and maintained on the project.
We need people in the room with Rust and BDK installed to participate in the room multisig. One script will be a simple n-of-n; the other will have a recovery clause in the script.
Using BDK for key generation. It has a trait for generating keys which asks for a context. Step 1: generate a WIF; we are using testnet. Step 2: print the xpub to stdout, using rust-bitcoin to create the public key from the private key. Step 3: save the WIF, i.e. write the private key into a file.

cargo run --bin generate_keys

Going to share public keys in the Telegram group: just a public key, not an extended public key.
Share the public key in the Telegram group if you are following along even if you aren't here in person :) t.me/+N0P4t1q6vWc0YTI0
Now going to generate the descriptor; just one person is needed to generate it. A recovery private key is generated. A dummy key is put in the Taproot key path; the multisig and recovery paths go in the Taproot script tree, with a timelock on the recovery script path.
Copy and pasting public keys from the Telegram group into the BDK code. We have a vector of strings and are putting them together: we want one long string with commas in between the keys. Now creating the two policies, which are human readable.
The first policy is a thresh(); it is going to require half of the people who have shared their public key in the Telegram group to sign.
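The string assembly described above can be sketched like this; the keys are placeholders (in the session they were real hex public keys pasted from Telegram), and `thresh(k,pk(...),...)` is the human-readable policy syntax BDK compiles:

```rust
// Assemble the thresh() policy string from the shared public keys.
// Half of the participants are required to sign.
fn thresh_policy(pubkeys: &[&str]) -> String {
    let required = pubkeys.len() / 2;
    let pks: Vec<String> = pubkeys.iter().map(|k| format!("pk({})", k)).collect();
    format!("thresh({},{})", required, pks.join(","))
}

fn main() {
    // Placeholder keys standing in for the six participants' pubkeys.
    let keys = ["keyA", "keyB", "keyC", "keyD", "keyE", "keyF"];
    let policy = thresh_policy(&keys);
    assert_eq!(
        policy,
        "thresh(3,pk(keyA),pk(keyB),pk(keyC),pk(keyD),pk(keyE),pk(keyF))"
    );
    println!("{}", policy);
}
```

With six participants this yields the 3-of-6 threshold used later in the session.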

The second will use the recovery private key.
Use Rust objects instead of strings? When using strings, nothing notices if you have copy and pasted the string wrong, missed a character or two, etc.

Now compiling the policy to a Miniscript descriptor.

tr(dummy_internal, script path 1, script path 2)
Merging the two Tapleaf descriptors into a single tr() descriptor. The dummy key is 020202020202....
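The merge can be sketched as simple string assembly into the descriptor form `tr(INTERNAL_KEY,{LEAF_1,LEAF_2})`; the dummy internal key and the leaf contents below are placeholders, not the session's actual values:

```rust
// Merge two leaf scripts into one tr() descriptor string.
fn tr_descriptor(internal_key: &str, leaf1: &str, leaf2: &str) -> String {
    format!("tr({},{{{},{}}})", internal_key, leaf1, leaf2)
}

fn main() {
    // Placeholder dummy internal key and placeholder leaves: a multisig
    // path and a timelocked recovery path.
    let dummy_key = "0202020202020202020202020202020202020202020202020202020202020202";
    let desc = tr_descriptor(
        dummy_key,
        "multi_a(3,keyA,keyB,keyC,keyD,keyE,keyF)",
        "and_v(v:pk(recovery_key),older(144))",
    );
    assert!(desc.starts_with("tr(0202"));
    assert!(desc.ends_with("older(144))})"));
    println!("{}", desc);
}
```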

Daniela is creating a Bitcoin wallet using the BDK API. Paste the descriptor in, and it checks the checksum. Then it asks for a descriptor for change; we just put None as it is a demo.
A memory database is being used. It gets cleaned every time you finish running the program and doesn't contain any secrets. There are many databases in BDK; SQLite etc. would be used for private keys.
We ask BDK for a new address.

createpsbt, print a struct. Sending coins to this address from a faucet.

Going to use Esplora to give us our balance, but in BDK you can use a Core node, Electrum, or compact filters.
Immature is coinbase; trusted pending is your own unconfirmed funds (you aren't going to zero-conf yourself). Waiting for confirmation on the transaction. We need to tell the wallet which policy to use: some policies contain timelocks, and BDK needs to know beforehand if you are using timelocks.
We ask BDK for all the external policies: a huge object with various policies, difficult to read. One huge policy encapsulates all the policies within it. We are going to use the thresh policy rather than the recovery script path with the CSV timelock.
Policy 0 is the Taproot internal (dummy) key, policy 1 is the threshold, policy 2 is the recovery script path.

BDK looks at the descriptor and prints out all the policies you might spend from.
Going to be polite and send the funds back to the faucet we used in the transaction we are constructing. Calling the tx builder's finish(). Generating the PSBT in base64 and putting that in the Telegram group for later; it needs signatures to be finalized.
Now we need to sign the transaction. The descriptor and PSBT are shared in Telegram. Instead of Peek(1) in the BDK code, put New and run it; Peek(1) would return the address at index 1.
Now we need to tell BDK to sign with a particular private key. It prints the private key as a sanity check. Then create a signer wrapper with a signer context of Taproot, telling BDK that this isn't a Taproot internal key.
Running the code to generate a signature. One person gets an error: they lost the private key for their public key. We only need half the participants to sign in this threshold, so we're still ok.
The external keychain is for non-change (receive) addresses and the internal keychain is for change.

Copy and pasting the PSBT blob into the code (after from_str). Everyone individually signs the PSBT and then shares those signatures in the Telegram group.

println!("{}", psbt);
We need 3 signatures out of the 6 participants. We have another problem with someone copy and pasting the file and opening it in a text editor rather than copy and pasting the PSBT string directly, but it looks like we have 3 distinct signatures.
Need to merge all the signatures together into one PSBT.
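Conceptually, the merge works like this: each signer returns a copy of the PSBT with only their own partial signature added, and the combiner unions the per-input signature maps. This is a simplified stand-in for the real PSBT structures, not BDK's actual types:

```rust
use std::collections::HashMap;

// Per-input map of partial signatures: pubkey -> signature.
type PartialSigs = HashMap<String, String>;

// Merge another signer's copy into the base PSBT's inputs.
fn combine(base: &mut Vec<PartialSigs>, other: Vec<PartialSigs>) {
    for (mine, theirs) in base.iter_mut().zip(other) {
        for (pubkey, sig) in theirs {
            mine.entry(pubkey).or_insert(sig); // keep the first copy seen
        }
    }
}

fn main() {
    // One input, three signers, each contributing one signature.
    let mut base: Vec<PartialSigs> = vec![HashMap::new()];
    for (pk, sig) in [("keyA", "sigA"), ("keyB", "sigB"), ("keyC", "sigC")] {
        let mut signed: PartialSigs = HashMap::new();
        signed.insert(pk.to_string(), sig.to_string());
        combine(&mut base, vec![signed]);
    }
    assert_eq!(base[0].len(), 3); // enough for the 3-of-6 threshold
    println!("combined {} signatures", base[0].len());
}
```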

combine(psbt) didn't explode. Now we need to finalize (base_psbt.finalize).

let finalized_tx = psbt.extract_tx();
We have a valid transaction! Let's broadcast it. Using Esplora to broadcast.

let blockchain = EsploraBlockchain::new

Transaction is on Blockstream.info and the funds are sent back to the testnet faucet with 3 valid signatures 🎉