01:33 UTC

< August 2016 >

sipa changed the topic of #bitcoin-wizards to: This channel is for discussing theoretical ideas with regard to cryptocurrencies, not about short-term Bitcoin development | http://bitcoin.ninja/ | This channel is logged. | For logs and more information, visit http://bitcoin.ninja

<andytoshi>
amiller: i've been thinking a bit about compact SPV over the last couple weeks and i think i finally have a simple understanding of what you were saying back when we were writing the sidechains whitepaper

<andytoshi>
if i hand you a chain with a million blocks in it, the time it took to generate that (assuming constant hashrate etc) will follow the distribution \sum_1^{1000000} Exp(1) = Erlang(1000000, 1)

<andytoshi>
compare this to the extreme case of a "compact SPV" chain where it's just one block with a million times the PoW on it. this has generation time distribution Exp(1000000)

<andytoshi>
the erlang one has variance proportional to a million; the exponential proportional to a million squared ... with the result that you've gotta spend ~ 1000000 ± 2000 blocktimes to have a nonnegligible chance of producing the whole chain. basically it really is a proof that a million blocktimes' worth of work was done

<andytoshi>
contrast the exponential distribution which is so flat that there's almost the same probability that it took 1 blocktime to generate this super-work block as there is that it took a million. (more precisely what i mean is that the CDF climbs from 0 to nearly one smoothly as the time taken goes from 0 to a couple million)
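
A quick simulation makes the contrast concrete (a sketch with toy numbers, normalizing one blocktime to 1): the total time for a 1000-block chain is tightly concentrated around its mean, while a single block carrying 1000x the work has a generation time spread over the whole range.

```python
import random
import statistics

random.seed(0)
N = 1000        # number of unit-difficulty blocks
TRIALS = 2000

# Chain of N diff-1 blocks: total time ~ Erlang(N, 1), variance N.
chain_times = [sum(random.expovariate(1.0) for _ in range(N))
               for _ in range(TRIALS)]

# One block with N times the difficulty: time ~ Exp(mean N), variance N^2.
single_times = [random.expovariate(1.0 / N) for _ in range(TRIALS)]

print(statistics.mean(chain_times), statistics.stdev(chain_times))    # ~1000, ~sqrt(1000)
print(statistics.mean(single_times), statistics.stdev(single_times))  # ~1000, ~1000
```

Both have expected time ~1000 blocktimes, but the chain's standard deviation is ~32 blocktimes versus ~1000 for the single heavy block, which is exactly the "flat CDF" effect described above.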

<andytoshi>
so even though both these "chains" (a million 1-diff blocks) and (one million-diff block) have the same expected work, the resulting chains prove drastically different things

<andytoshi>
an actual compact SPV chain with log-many blocks is going to be weirdly weighted, you expect it'll have one block with a ton of work, one with half that, one with half that, and so on .. but we can imagine for a minute that actually the blocks are all equal-difficulty, there are just log-many of them, and this will give some intuition as to the tradeoff between blockchain length and variance. (this is

<andytoshi>
a really bad approximation, but you can sum exponential distributions of equal characteristic time to get erlang distributions .... as soon as you start having variable targets you have to do things numerically and it's a mess)

<andytoshi>
the variance we get doing this is n_blocks * block_weight^2 .... lower variance is good, the lower the variance the closer the expected time is to how much work is actually "proven". we can see the two extremes i described, variance a million (a million diff-one blocks) and variance million^2 (one diff-million block)

<andytoshi>
so we have N for N blocks, N^2 for 1 block ... for log(N) blocks of difficulty N/log(N) (same total expected work) our equation becomes log(N)*(N/log(N))^2 = N^2/log(N)

<andytoshi>
you can replace log with any function in the above. you see if you try to play difficulty games to trim the blockchain length down from N to f(N), the variance of your time distribution goes from N to N^2/f(N)

<andytoshi>
so here's a stats/real analysis question .. how is P(chain generated in less than expected time * "negligible") affected by the shape of f?

<amiller>
chernoff bounds let you say that there's negligible probability that a random variable (in this case the amount of work done) differs from its expected value (the expected work based on just adding up the difficulty) by more than some fraction

<amiller>
it's negligible in the sense that the probability is exp(-k), where k is the number of samples that make up the variable
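
For a sum S of k unit-mean exponentials, the standard Chernoff computation gives P(S ≤ a·k) ≤ exp(-k·(a − 1 − ln a)) for a < 1 (optimize exp(t·a·k)·E[exp(−t·S)] over t > 0): the exponent is linear in k, which is amiller's point. A minimal sketch:

```python
import math

def chernoff_lower_tail(k: int, a: float) -> float:
    """Chernoff bound on P(S <= a*k) for S a sum of k i.i.d. Exp(1)
    variables, a < 1: the optimizing t = 1/a - 1 yields
    exp(-k * (a - 1 - ln a))."""
    assert 0 < a < 1
    return math.exp(-k * (a - 1 - math.log(a)))

# Probability a k-block chain was generated in at most 90% of its
# expected time: shrinks exponentially as the number of blocks grows.
for k in (100, 1000, 10000):
    print(k, chernoff_lower_tail(k, 0.9))
```

Doubling the number of samples squares the bound, so dropping blocks (fewer samples for the same expected work) directly weakens the statement the chain proves.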

<andytoshi>
:/ i don't have our conversation logs from the time we were writing the paper, but i think i remember you mentioning chernoff's bound and saying "this is pretty inherent, you can't drop blocks and prove a statement of the same strength"

<kanzure>
"The discrete logarithm problem over prime fields can be transformed to a linear multivariable Chinese remainder theorem" https://arxiv.org/abs/1608.07032

<kanzure>
"We show that the classical discrete logarithm problem over prime fields can be reduced to that of solving a system of linear modular equations."

<Taek>
amiller, andytoshi, I think I'm missing information. So you have a bunch of SPV blocks that demonstrate that X much work was done, but how do you prove that it was all done on a linked list of blocks?

<Taek>
What's to stop me from grabbing blocks from the real chain to show that my attack chain has a bunch of work?

<andytoshi>
Taek: sorry, there is missing context. suppose every block commits to every previous block in a merkle tree, rather than having just a prevblockhash link

<andytoshi>
then i can give you a chain from tip to genesis where i've dropped a bunch of blocks, and you can use the extra backlinks to see that it follows all the way through

<andytoshi>
there is a data structure called a "skiplist" that has a paper about it, even though pretty much all the intuition is in its name :)
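
A deterministic flavor of that intuition can be sketched in a few lines (a hypothetical scheme for illustration, not any particular paper's construction): let block i carry backlinks to i − 2^j for every power of two 2^j ≤ i; a verifier then walks from any tip back to genesis in O(log n) hops by always taking the longest available skip.

```python
def backlinks(i: int) -> list[int]:
    """Indices block i links back to: i - 2^j for every 2^j <= i."""
    links, step = [], 1
    while step <= i:
        links.append(i - step)
        step *= 2
    return links

def path_to_genesis(tip: int) -> list[int]:
    """Greedily take the longest available skip at each hop."""
    path = [tip]
    while path[-1] > 0:
        path.append(min(backlinks(path[-1])))  # longest skip = smallest target
    return path

# From block 1,000,000 back to genesis in a handful of hops
# (one hop per set bit of the tip's index).
print(path_to_genesis(1_000_000))
```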

<amiller>
there's still more context needed, it's pretty non-trivial how to compare two proof of work samples in a way that guarantees forks don't cause a problem, this isn't addressed in the sidechains paper

<amiller>
i think that the algorithm in this paper does the trick but that it wasn't fleshed out until here http://fc16.ifca.ai/bitcoin/papers/KLS16.pdf

<andytoshi>
yeah, the sidechains paper was using them basically as a witness where uniqueness was not important

<andytoshi>
if you were actually trying to do pruning, i think algorithm in the paper can run into trouble ... imagine i give Taek blocks 1, 2, 4 (with a skiplink from 4 to 2), then you make block 5 which is supposed to commit to 1, 2, 3, 4

<andytoshi>
i'll read this paper, thank you, maybe it solves it. (i think i solved it myself but in an ad hoc way)

<andytoshi>
amiller: oh, you meant something different from what i was thinking by "forks don't cause a problem", i have been thinking about actually pruning blockchains with compact SPV and have been worried about the compact SPV itself causing forks where people don't agree on whether new blocks extend the chain

<andytoshi>
well, what i'm thinking is, imagine you have a 'canonical compact chain', where every block is required to skip (a) to the previous block; (b) as far back as it can in the already-pruned chain. (you don't know (b) til you've mined it so you've gotta commit to a lot of blocks, but only those two will matter in the end)

<andytoshi>
then you explicitly commit to the block indices in the already-pruned chain, so there's consensus on that

<andytoshi>
then validators need to check only two merkle proofs, and the same ones for every verifier. if they've got the compact chain up to the point of the block they're validating, they know what indices need to be committed to

<andytoshi>
there is the problem that compact verifiers cannot check the link to the previous block, but this is a much smaller consensus risk
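
The verifier side of that scheme can be sketched with a toy Merkle tree (hashlib only; the block structure here is a made-up illustration, not a spec): each block commits to the hashes of all previous blocks under one root, and a compact verifier checks exactly two branches against it, the previous block and the farthest reachable already-pruned block.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [H(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd node
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_branch(leaves: list[bytes], idx: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes from leaf idx up to the root (bool = sibling on right)."""
    level, branch = [H(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = idx ^ 1
        branch.append((level[sib], sib > idx))
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return branch

def verify_branch(leaf: bytes, branch, root: bytes) -> bool:
    h = H(leaf)
    for sib, right in branch:
        h = H(h + sib) if right else H(sib + h)
    return h == root

# Block 8 commits to blocks 0..7; a compact verifier checks two branches:
# the previous block (index 7) and the farthest pruned block (say index 4).
prev_hashes = [b"block%d" % i for i in range(8)]
root = merkle_root(prev_hashes)
for idx in (7, 4):
    assert verify_branch(prev_hashes[idx], merkle_branch(prev_hashes, idx), root)
```

Because the committed indices are fixed by consensus, every verifier checks the same two branches, which is what keeps the compact proof itself from becoming a source of forks.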

<andytoshi>
so i'm thinking of like, a mimblewimble chain where everyone has the last k blocks (for k = 10000 or some security parameter) and just a compact spv proof from that back to the genesis

<andytoshi>
the compact spv proof could be missing backlinks, and this would mean the chain wouldn't have been verified by anyone doing full verification back then, but this is entirely irrelevant to the chain's operation ___now___

<kanzure>
andytoshi: hm? why not store and commit to old spv proofs? storage by unrelated third parties, store hash in chain, later procure it as evidence that things seemed to verify in the past, ya?

<kanzure>
maaku: well, it means we need to brush up on piano and outwit djb with our own musical team

<andytoshi>
i can tell you they don't understand the ramifications themselves, they say "multivariate CRT is hard lol" but typing "multivariate chinese remainder theorem" into google gets you http://www.math.harvard.edu/~knill/preprints/linear.pdf which points out a way to solve it in many circumstances

<andytoshi>
maaku: like, they end with this example "compute the DL of 4 with respect to 2 mod 11" (btw this is 2, 2^2 = 4 :)). they go through their machinations to get "beta + n = 6 mod 11", "beta + 2n = 3 mod 5". by looking for solutions on the curve (beta, n) = (2t, t) this is {3t = 6 mod 11, 4t = 3 mod 5} or {t = 8 mod 11, t = 2 mod 5}

<andytoshi>
run this through the regular CRT which is super fast (ext. euclid's algo on 11 and 5) and out pops t = 68. and sure enough, 68 mod 11 = 2, which is the answer
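
That CRT step really is just the extended Euclidean algorithm; a minimal version (with its own small example rather than the paper's numbers, since those are disputed in the surrounding discussion):

```python
def ext_gcd(a: int, b: int) -> tuple[int, int, int]:
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(r1: int, m1: int, r2: int, m2: int) -> int:
    """Solve t = r1 (mod m1), t = r2 (mod m2) for coprime m1, m2."""
    g, x, _ = ext_gcd(m1, m2)  # x is the inverse of m1 mod m2
    assert g == 1
    return (r1 + (r2 - r1) * x % m2 * m1) % (m1 * m2)

# e.g. t = 2 (mod 11) and t = 3 (mod 5):
print(crt(2, 11, 3, 5))  # 13: 13 % 11 == 2 and 13 % 5 == 3
```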

<andytoshi>
the eqn for b1 on the bottom of page 3 is wrong. if you use it you get a different value in their example (and you fail to solve the discrete log)

<andytoshi>
honestly i think the proof of lemma 2 is wrong, they handwaved a step and mixed up n*phi(q) - 1 and (n - 1)*phi(q) .... then when they were doing their "numerical example" they just fudged the numbers to make it work