February 2017, 22:04 UTC

sipa changed the topic of #bitcoin-wizards to: This channel is for discussing theoretical ideas with regard to cryptocurrencies, not about short-term Bitcoin development | http://bitcoin.ninja/ | This channel is logged. | For logs and more information, visit http://bitcoin.ninja

<bsm117532>
Heh, I corrected that error (Satoshi's Poisson vs. the correct Incomplete Beta Function) in my braids draft, but didn't think it important enough for its own paper... Speaking of which, I should finish that work...
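
A rough sketch of the correction being alluded to, assuming the usual double-spend race model: Satoshi's whitepaper approximates the attacker's block count with a Poisson distribution, whereas the exact count while the honest chain finds z blocks is negative binomial (its CDF is a regularized incomplete beta). Function names and the boundary convention are illustrative, not taken from any paper cited here.

    # Sketch only: Poisson approximation vs. the negative-binomial
    # ("incomplete beta") treatment of the attacker's catch-up probability.
    # Boundary conventions differ slightly between papers; this mirrors the
    # whitepaper's (q/p)**(z - k) catch-up factor.
    from math import exp, factorial
    from scipy.stats import nbinom

    def satoshi_catch_up(q, z):
        """Whitepaper estimate: attacker blocks ~ Poisson(z * q / p)."""
        p = 1.0 - q
        lam = z * q / p
        prob = 1.0
        for k in range(z + 1):
            poisson_k = exp(-lam) * lam ** k / factorial(k)
            prob -= poisson_k * (1.0 - (q / p) ** (z - k))
        return prob

    def negbin_catch_up(q, z):
        """Exact race: attacker blocks found while honest miners find z blocks
        are negative binomial; its CDF is a regularized incomplete beta."""
        p = 1.0 - q
        prob = sum(nbinom.pmf(k, z, p) * (q / p) ** (z - k) for k in range(z))
        return prob + (1.0 - nbinom.cdf(z - 1, z, p))  # attacker already ahead or tied

    for z in (1, 6, 12):
        print(z, satoshi_catch_up(0.1, z), negbin_catch_up(0.1, z))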

<bsm117532>
Actually I made a little analysis which allows one to correctly combine work from miners using different targets. (You can't just add targets in that case) But I no longer think allowing miners to mine at different targets is terribly useful...

<bsm117532>
Because even with a DAG instead of a chain, there's still a lower limit on the allowed target due to latency effects, so there's only a narrow range of allowable difficulty targets anyway.

<kanzure>
low difficulty blocks should get auto-aggregated into some sort of pool proportional p2sh reward magic

<bsm117532>
kanzure: distributed aggregation is only possible if the rate of aggregation is comparable to the network latency (ping time). You can do centralized aggregation faster though...

<bsm117532>
I'm skeptical...64 bits per txn seems like it will be possible to brute-force some kind of collision. We'll see...
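
For context on that 64-bit worry, a hedged back-of-the-envelope (whatever the per-transaction identifiers refer to here, this is just the generic birthday arithmetic):

    # Birthday bound for random 64-bit identifiers: ~sqrt(2 * N * ln 2) candidates
    # for a 50% chance of some collision, vs. ~N attempts for a targeted one.
    from math import sqrt, log

    N = 2 ** 64
    print("50%% birthday collision after ~%.2e random IDs" % sqrt(2 * N * log(2)))
    print("targeted collision against one known ID: ~%.2e attempts" % float(N))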

<Eliel>
bsm117532: one of his tweets about it described it something like: it looks like a hub-and-spokes system, but something different happens in the background.

<bsm117532>
How can a proof-of-proof-of-work commitment handle retargets? (e.g. for each 1 in the binary representation of the height, include the smallest block hash in that range)

<bsm117532>
e.g. at height 1025, tell me the smallest block hash in the first 1024 blocks, and the hash of the 1025th block.
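
A minimal sketch of the range splitting being described, with hypothetical helper names:

    # One span per set bit of the height; commit to the smallest block hash in each.
    def popow_ranges(height):
        """Split [0, height) into power-of-two spans, one per set bit of height."""
        ranges, start = [], 0
        for bit in reversed(range(height.bit_length())):
            if height & (1 << bit):
                ranges.append((start, start + (1 << bit)))
                start += 1 << bit
        return ranges

    def popow_commitment(block_hashes):
        """E.g. 1025 blocks -> min hash over blocks 0..1023, then block 1024 alone."""
        return [min(block_hashes[a:b]) for a, b in popow_ranges(len(block_hashes))]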

<bsm117532>
I could do it with more commitments, but your security falls back to trusting consensus, instead of a direct proof of work.

<uiuc-slack>
<amiller> i haven't thought through this carefully yet, but i think it is no problem, ideally the popow should be based on absolute difficulty rather than relative to the target

<uiuc-slack>
<amiller> it's clearly plausible but the challenge is to make sure that someone with very low difficulty blocks can't make a valid huge proof

<bsm117532>
Maybe this isn't an issue. I think you would need to enforce that the "highest work chain" is the one represented by the smallest hashes in this way, rather than the one represented by a direct sum of all blocks.

<bsm117532>
Going back to the above discussion of the incomplete beta function, it's straightforward to formulate a likelihood function that one chain contains more work than the other.

<bsm117532>
...by examining only log(height) hashes and using the (corrected relative to Satoshi) incomplete beta function formulation.

<bsm117532>
Making low difficulty blocks wouldn't help you, because the likelihood function built from the incomplete beta would rank the longer chain as lower work (unless it accidentally got lucky with a very small hash -- but we know the probability of that).
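
A sketch of the statistics being invoked, under the simplifying assumption that block hashes are uniform on [0, 1): the minimum of W attempts has density W(1 - m)^(W - 1), which gives a likelihood over total work from a single committed minimum hash. This is the standard order-statistic version, not necessarily the exact formulation meant above.

    import numpy as np

    def log_likelihood_of_work(min_hash, work):
        """log L(W | m): density of the smallest of W uniform draws at m."""
        return np.log(work) + (work - 1) * np.log1p(-min_hash)

    def most_likely_work(min_hash):
        """MLE: W* = -1 / log(1 - m), roughly 1 / m for small m."""
        return -1.0 / np.log1p(-min_hash)

    print(most_likely_work(1e-6))   # a very small minimum hash implies ~1e6 attempts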

<andytoshi>
bsm117532: amiller: in my mimblewimble paper i just had a merkle-sum-tree of blockheaders and their difficulty

<andytoshi>
this is sufficient to prove the difficulty of any block in relation to that of any other block, to prove the cumulative difficulty before and after that block, etc

<bsm117532>
andytoshi: I can vaguely figure out what that means, but it could stand to have more exposition.

<bsm117532>
I think they do. But this is a different security model than I described above, depending on consensus to build the tree correctly. Given a truncated tree ("merkle path") the truncation points cannot be independently verified to represent the correct amount of work, unless you have the entire tree.

<bsm117532>
I tried for a while to find a way to make a data structure that had that property (using some magic known as "incremental hash functions") but I don't see a way to make it work.

<andytoshi>
yeah, the tree has every blockheader and it sums the difficulties in each header, so the root has the total cumulative difficulty of the chain
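
A toy version of the sum tree being described (the mimblewimble paper's construction may differ in detail; this only shows the shape of it):

    import hashlib

    def node(left, right):
        """Parent commits to both children and to the sum of their difficulties."""
        h = hashlib.sha256(left[0] + right[0]).digest()
        return (h, left[1] + right[1])

    def sum_tree_root(leaves):
        """leaves: list of (header_hash: bytes, difficulty: int) pairs."""
        level = list(leaves)
        while len(level) > 1:
            if len(level) % 2:            # duplicate the last node on odd levels
                level.append(level[-1])
            level = [node(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]                    # (root hash, total cumulative difficulty)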

<tromp_>
with cumulative difficulty in the headers themselves, you don't need an additional field in the merkle tree of blocks

<andytoshi>
yep, although every internal node in the tree needs to commit to the sum of the difficulties of its children

<andytoshi>
yes but to verify cumulative difficulty without downloading every leaf, you need to have commitments in the nodes

<bsm117532>
tromp_: What andytoshi describes is actually an incremental hash function. What you do is store the Merkle path to the most recent addition to the tree. You can dump the entire rest of the tree. From that last Merkle path, you can update the root by one block.
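
A self-contained sketch of that incremental-update trick: keep only one pending node per tree level (the path covering the latest addition), so the root can be advanced block by block without the rest of the tree. Names are hypothetical.

    import hashlib

    def combine(left, right):
        """(hash, difficulty) parent node, as in a merkle sum tree."""
        return (hashlib.sha256(left[0] + right[0]).digest(), left[1] + right[1])

    class IncrementalSumTree:
        """Append-only accumulator: stores at most one pending node per level."""
        def __init__(self):
            self.pending = []          # pending[i] covers 2**i leaves, or is None

        def append(self, leaf):        # leaf = (header_hash, difficulty)
            for level in range(len(self.pending)):
                if self.pending[level] is None:
                    self.pending[level] = leaf
                    return
                leaf = combine(self.pending[level], leaf)
                self.pending[level] = None
            self.pending.append(leaf)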

<andytoshi>
tromp_: if you have cum difficulty commitments you can use them to get a proof-of-proof-of-work even with varying difficulty

<andytoshi>
if somebody forges a leaf's difficulty (as the sumtree will force them to do if they're forging a node's difficulty) this is the same as not mining a block and forging a normal popow

<bsm117532>
Though, Bitcoin's PoW is probabilistic too -- you don't know how much hashpower was ***actually*** used to generate a set of block headers.

<andytoshi>
https://download.wpsoftware.net/bitcoin/wizardry/mimblewimble.pdf section 3, theorem 7 says what i'm saying

<andytoshi>
tromp_: if i give you a merkle path in that how can you know anything at all about other blocks' difficulty?

<andytoshi>
...so the powopow requires you give me the headers of every "skipped" block, so there is no skipping at all?

<bsm117532>
no...popow requires only the smallest block hash within a range. Using the correct statistics, this gives you a probability distribution for the hashrate used to generate it.

<andytoshi>
bsm117532: how can anyone tell what the smallest block hash is without downloading the whole range (or using a sumtree)?

<bsm117532>
It would have to be committed to in the block header, so a client would download the popow with the most recent block header.

<andytoshi>
i'm not sure what you mean, M is the "real difficulty" of an individual block (vs the target difficulty which is the binary invalid/valid threshold)

<bsm117532>
andytoshi: Actually you don't need to sum work. For Poisson processes like this, it's statistically equivalent to just keep the smaller of the block hashes instead of computing their summed difficulty. In that way you could just propagate the smallest hash up the tree. This combines these two ideas...

<andytoshi>
ah i gotcha bsm117532 ... you can consider this as a "sum" where the sum function is min?

<bsm117532>
This preserves the ability of a client to directly verify that the root hash actually represents a certain amount of work...
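
The "sum where the sum function is min" variant is just the same node rule with min in place of addition (illustrative only):

    import hashlib

    def min_node(left, right):
        """Parent commits to children and carries the smallest block hash below it."""
        h = hashlib.sha256(left[0] + right[0]).digest()
        return (h, min(left[1], right[1]))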

<gmaxwell>
you can't compute exact work from that min and get the same result as a comparison done the ordinary way. if you use the lowest hash you allow the 'high variance attacks'

<gmaxwell>
E.g. where you have a low but finite and non-trivial probability that you replace the whole chain with your one asic. :P

<bsm117532>
This idea can also be improved by presenting the smallest N hashes in a given subset, where the variance goes down as N goes up.

<gmaxwell>
bsm117532: but I extend an ___existing___ chain, so all but one of the N are not mine. that procedure also has the problem that adding an additional block to a chain likely doesn't increase its work.

<bsm117532>
Variance is the right way to think about this gmaxwell. It's a pretty straightforward analysis to figure out how many hashes need to be presented.
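
A quick Monte Carlo of the variance point, under the same uniform-hash toy model as above: estimate the number of attempts W from the k-th smallest of W draws (roughly W ~ k / m_k) and watch the spread shrink as k grows. Numbers are illustrative only.

    import numpy as np

    def estimate_work(samples, k):
        """Rough work estimate from the k-th smallest of the draws."""
        return k / np.sort(samples)[k - 1]

    rng = np.random.default_rng(0)
    true_work = 10_000
    for k in (1, 4, 16):
        est = np.array([estimate_work(rng.random(true_work), k) for _ in range(500)])
        lo, med, hi = np.percentile(est, [10, 50, 90])
        print(f"k={k:2d}  median~{med:7.0f}  10-90% spread~{hi - lo:7.0f}")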

<gmaxwell>
the skiplist proofs do not have the problem where linear verifiers compute a different work than proof-of-proof readers do.

<bsm117532>
gmaxwell: But when you're adding one, you're comparing forks with equivalent history up to some point. It's the work done after the forking point that must be compared, not variance on the larger, historical blocks. (e.g. you can allow more variance on the larger set of old blocks)

<gmaxwell>
Peer X offers you chain A, Peer Y offers you chain B -- each offering a proof of how much work it takes. Which chain do you take? You must reach the same conclusion as nodes who verified blocks one at a time or you lose the game.

<bsm117532>
I guess what I'm meandering towards is that I think there exists another definition of "highest work" which can be verified by presenting log(height) hashes, and cannot be contradicted by the full history. (Or, full-history nodes would accept this other definition and not the work-sum Satoshi used)

<gmaxwell>
then you're needlessly creating a vulnerability where an attacker with a couple lucky blocks can replace the whole chain, with much higher probability of success than the current system. Already the current system has P=1 of attacker success in finite but astronomically large time if the attacker maintains any constant proportion of hashpower and hashpower increases exponentially with any exponent

<gmaxwell>
I would guess that a constant number of samples removes the 'exponentially with any exponent' from that result.

<bsm117532>
I can compute how many "lucky blocks" he would need though, and incorporate that into the definition to kill this attack.

<bsm117532>
Another way to put it: given the oldest 2^n blocks, the subset of those blocks with the ***largest*** hashes can be truncated -- they don't prevent an attacker at all.

<andytoshi>
awesome tromp_ ... with mimblewimble i had the additional constraint that i needed a "canonical" powopow because i wanted people to be able to drop blocks entirely, but i think what i came up with there is generally useful

<andytoshi>
no, i started to, but ran out of time working out that section of the paper and then i had to give a talk about it :(

<tromp_>
btw, is there much advantage to committing to a merkle sum tree, rather than just the previous compact block?

<tromp_>
since the compact chain is of only log length, can't every client be expected to have the whole chain in memory?

<andytoshi>
yes, but i think for new verifiers to know that they got the canonical compact chain, you need the merkle sum-tree construction

<tromp_>
it seems everyone can quickly query for all blocks on the compact chain if they don't already have them, and then they have no more need for the merkle sum tree

<andytoshi>
the question is, without the tree, given only the compact chain, can you verify that this is the real chain (no) and does this affect the security model (i don't know)

<bsm117532>
Hmmm...one generalization beyond using min() instead of sum() to reduce variance is to use an Order Statistic. But that's the "kth smallest value" and requires you to have all k values... :-(

<tromp_>
i was thinking of how, with an asymmetric pow, you still add some witness data to the header after having mined a block

<andytoshi>
i'm not understanding you, are you talking in the context of an asymmetric pow? how does this change the fact that you don't know the "real difficulty" of a block until it's mined?

<tromp_>
if you add it to the header, then blocks need to commit to predecessor by hashing both its header and this extra field

<tromp_>
or maybe it's clearer to refer to the full header (including the prev compact link) and the truncated header that is used for hashcash

<tromp_>
so if you're building block i+1, then you compute the compact chain successor of block i, let's call it block j, and you commit to block j+1