February 2017

sipa changed the topic of #bitcoin-wizards to: This channel is for discussing theoretical ideas with regard to cryptocurrencies, not about short-term Bitcoin development | http://bitcoin.ninja/ | This channel is logged. | For logs and more information, visit http://bitcoin.ninja

<dgenr8>
The third equation on page 6 differs from the Nakamoto formula, yet claims to restate it. It is off by one in the upper summation index.

<tromp_>
downside of replacing difficulty by a cumulative version is that you need full 256-bit precision

<tromp_>
but that is a small price to pay, and gives fewer worries about frequent, more fine-grained difficulty adjustments
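
The precision point can be checked with a quick sketch (not from the log; the targets are made up, and the work formula is the Bitcoin-style approximation 2^256/(target+1)): cumulative work sums cleanly in an arbitrary-precision integer, whereas a fixed-width field would need the full 256 bits tromp_ mentions.

```python
# Illustrative only: per-block work in the Bitcoin style is roughly
# 2**256 // (target + 1); cumulative chain work is the sum over blocks.
# Python ints are arbitrary precision, so the sum never overflows.
def block_work(target):
    return 2**256 // (target + 1)

# hypothetical targets, easiest first (a larger target means less work)
targets = [2**224, 2**220, 2**216]
cumulative = sum(block_work(t) for t in targets)
print(cumulative.bit_length())  # already past 32 bits with toy targets
```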

<andytoshi>
regarding your successors-commit-to-successors scheme, does this mean basically each block in the compact chain has two headers?

<andytoshi>
does this create an attack wherein i take any high-work block from any part of any chain and build on that?

<tromp_>
the first in each pair has the lucky pow, and the second the commitment to both the first and the previous pair

<andytoshi>
here you're just taking arbitrary work that you didn't have to do yourself (you could have found it anywhere) and then using that as a ticket to skip

<andytoshi>
here you have little work which commits (a) to the compact chain, and (b) to a bunch of work

<andytoshi>
that's fine, but unless (b) commits to the compact chain, its work doesn't prove anything about the compact chain

<andytoshi>
heh, yeah, it took me way longer than i'd expected to convince myself that the merkle tree thing worked the way that i'd handwaved it to

<tromp_>
you're saying you could take a compact chain (b) and fake all the successors forming (a), which does indeed seem to be a problem:(

<tromp_>
btw, your effective difficulties are just sums of difficulties of partitioning block ranges, right?

<bsm117532>
tromp if I understand correctly, you want to commit to the luckiest block in history, and the most recent block as a pair?

<bsm117532>
If instead you commit to a log(height) sized list of blocks representing the luckiest blocks in each range, you've committed to the whole history and the PoW in a verifiable way. Truncating log(height) to 2 seems to introduce problems...

<bsm117532>
(or as discussed yesterday, some subset of the luckiest blocks in each range -- to reduce variance)

<bsm117532>
e.g. at height 1153 = 0b10010000001 you'd commit to the luckiest 3 blocks in the first 1024, the luckiest 2 blocks in the next 128, and the most recent block, where the numbers 3 and 2 need to be tuned to kill attacks that take advantage of variance.
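
The range decomposition bsm117532 describes just follows the set bits of the height; a small sketch (helper name invented, not from the log) makes the 1153 example concrete:

```python
# Hypothetical sketch of the binary range decomposition: split blocks
# 0..height-1 into power-of-two ranges, one per set bit of the height,
# largest range first. Lucky blocks would then be committed per range.
def binary_ranges(height):
    ranges = []
    start = 0
    for bit in reversed(range(height.bit_length())):
        size = 1 << bit
        if height & size:
            ranges.append((start, size))
            start += size
    return ranges

print(binary_ranges(1153))  # -> [(0, 1024), (1024, 128), (1152, 1)]
```

The output matches the message: a 1024-block range, a 128-block range, and the most recent block.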

<tromp_>
these are all variations of the high value hash highway proposed years ago, but andrew's formulation is the cleanest so far

<bsm117532>
Andrew's formulation certainly works AFAICT. My complaint about it is that the PoW cannot be verified without the entire history, and I think there's a way to reformulate it into a log(height) sized list per block, which allows direct verification of the PoW -- thus allowing clients to drop history.

<bsm117532>
e.g. given a merging rule (d1+d2, hash(h1+h2)), there's no way to verify that the combined difficulty d1+d2 corresponds to hash(h1+h2) unless you have the leaves.

<tromp_>
we imagine using this for PoWs with large witnesses (100s of bytes instead of just a nonce), in which case a self-contained powopow would be too big

<bsm117532>
the merging rule (min(d1,d2), hash corresponding to min(d1,d2)) does allow for direct verification.
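
The contrast between the two merge rules can be sketched as follows (all names invented; a node here is just a (difficulty, hash) pair): under the sum rule the merged difficulty is a bare claim a verifier cannot recheck from the merged hash alone, while under the min rule the retained hash is exactly the one the difficulty refers to.

```python
import hashlib

def h(data):
    # toy hash combiner for illustration
    return hashlib.sha256(data.encode()).hexdigest()

def merge_sum(a, b):
    """Sum rule (d1+d2, hash(h1+h2)): the combined difficulty cannot be
    verified against the combined hash without descending to the leaves."""
    (d1, h1), (d2, h2) = a, b
    return (d1 + d2, h(h1 + h2))

def merge_min(a, b):
    """Min rule: keep the pair whose difficulty is the minimum, so the
    surviving hash itself demonstrates the claimed difficulty directly."""
    return a if a[0] <= b[0] else b
```

For example, `merge_min((3, h("x")), (5, h("y")))` returns the `(3, ...)` pair unchanged, and a verifier can recheck that hash on the spot.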

<tromp_>
your desire to drop history sounded to me like you want to have a self-contained powopow that doesn't need to query early blocks

<bsm117532>
that is, I could take lucky hashes from anywhere, construct a fake commitment, and feed it to light clients?

<bsm117532>
tromp_ fake commitments like that would only work if the light client has no history at all. If the light client queries for the latest block on a semi-regular basis, the oldest lucky blocks will remain the same.

<bsm117532>
Also, by including <n> lucky blocks from the oldest range to reduce variance, some of those lucky blocks will still be lucky when the next power of 2 rolls around in log_2(height).

<tromp_>
so a blockchain design might end up using two block merkle trees; one for all blocks, and one for dominating blocks (what i would call blocks in the compact chain)

<bsm117532>
Ah yes, the "hash highway" is a Merkle skip list...amiller showed me that a while ago...

<tromp_>
yes bsm117532, but we should prefer foolproof solutions over ones that are probably secure with enough extra checking

<tromp_>
to witness the compact chain within a merkle tree on all blocks, you'd need a size log^2(n) proof of witness paths

<bsm117532>
Wait...let me define V(n) to be the number of lucky hashes presented at height 2^n to reduce variance. Assuming V(n) < V(n+1), ALL of the lucky hashes will already have been seen by a light client updating his wallet.

<tromp_>
btw, there is some small but non-trivial chance that at some point the compact chain consists of only 3 nodes: genesis, an extremely lucky pow, and its successor witnessing the luck

<bsm117532>
In the event there's a new lucky hash since the last block header update, the light client could directly query for the log(n) list of lucky hashes at that height.

<tromp_>
i'm sorry bsm117532; i'm too happy with andrew's powow design to seriously study variance based alternatives:(

<tromp_>
btw, andytoshi, it seems that the compact chain witnesses at least half of the cumulative difficulty of the entire chain, correct?

<kanzure>
wasn't this written up in http://diyhpl.us/~bryan/papers2/bitcoin/mimblewimble-andytoshi-draft-2016-10-20.pdf

<tromp_>
i was trying to remember if the compact chain witnesses all of the work or just the majority. guess i need to read those theorems in detail to see which statement is more correct

<andytoshi>
tromp_: "expected #blocks to find ***first*** block of difficulty >= D is D" seems meaningless

<Jeremy_Rand[m]>
Has anyone proposed/investigated the idea of a merge-mined chain in which the child chain is re-orged if and only if the parent chain is re-orged? (I.e. the block headers of the parent chain are used to decide the most-work rule of the child chain.)

<andytoshi>
(easier to think about stopping on sixes; then if you roll five dice without sixes, each has 3/5 chance of being even and there you go)

<andytoshi>
i'd bet if you built the markov chain and played it forward you'd get the same result with a lot more work :P

<andytoshi>
if it's something nice like "twice as many" or "e times as many" then *shrug*, but if it squares them or something awful i'm going to be sad

<andytoshi>
oh, i was calculating "if you roll 5 dice with no sixes, what is the expected number of evens?" to which the answer is 3, but i guess that's not right
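
andytoshi's doubt is well placed, and the conditional expectation is easy to check numerically (a quick sketch, not from the log): conditioned on a die not being a six, the remaining faces are {1..5}, of which {2, 4} are even, so each die is even with probability 2/5 and five dice give an expectation of 2, not 3.

```python
import random

def mean_evens_given_no_six(trials=200_000, seed=1):
    """Roll 5 dice, keep only trials with no sixes, and average the
    number of evens across the surviving trials."""
    random.seed(seed)
    total = count = 0
    for _ in range(trials):
        dice = [random.randint(1, 6) for _ in range(5)]
        if 6 in dice:
            continue  # condition on "no sixes"
        total += sum(1 for d in dice if d % 2 == 0)
        count += 1
    return total / count

print(round(mean_evens_given_no_six(), 2))  # close to 2.0
```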

<tromp_>
anyway, seems Lemma 2 needs to have a different outcome for expected #attempts to achieve some total eff. difficulty

<tromp_>
i think perhaps the best you can do is a O(log(n)) sized subchain that proves 1-eps of the work

<bsm117532>
"the claim is that rolling three evens should take as much time as rolling one six" -- that's not true. The former has a probability P=1/2 to roll an even, three rolls has probability P^3=1/8 to get all evens while one six has probability 1/6.
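
bsm117532's numbers can be checked exactly (a sketch, not from the log): each "attempt" is geometric, so the expected number of attempts is the reciprocal of the per-attempt success probability, giving 8 triples-of-rolls for three evens versus 6 rolls for one six.

```python
from fractions import Fraction

# Per attempt: three evens in three rolls succeeds with (1/2)**3 = 1/8,
# one six in one roll succeeds with 1/6. For a geometric distribution the
# expected number of attempts is 1/p.
p_three_evens = Fraction(1, 2) ** 3
p_one_six = Fraction(1, 6)

print(p_three_evens, 1 / p_three_evens)  # 1/8 -> 8 attempts expected
print(p_one_six, 1 / p_one_six)          # 1/6 -> 6 attempts expected
```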