sipa changed the topic of #bitcoin-wizards to: This channel is for discussing theoretical ideas with regard to cryptocurrencies, not about short-term Bitcoin development | http://bitcoin.ninja/ | This channel is logged. | For logs and more information, visit http://bitcoin.ninja
Guyver2 has quit [Quit: :)]
tromp has quit [Remote host closed the connection]
<uiuc-slack>
<amiller> Off topic, go to #bloop-dev
<bsm117532>
Heh, I corrected that error (Satoshi's Poisson vs. the correct Incomplete Beta Function) in my braids draft, but didn't think it important enough for its own paper... Speaking of which, I should finish that work...
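For context: Satoshi's whitepaper models the attacker's catch-up with a Poisson approximation, while the exact race probability can be written with the regularized incomplete beta function (the correction referenced here; a closed form of this kind is derived in Grunspan and Pérez-Marco's double-spend analysis). A minimal sketch of the comparison, assuming scipy is available; the function names are illustrative:

```python
from math import exp, factorial
from scipy.special import betainc  # betainc(a, b, x) = I_x(a, b), regularized

def satoshi_catchup(q, z):
    """Whitepaper section 11: attacker with hashpower fraction q
    catching up from z blocks behind, via a Poisson approximation."""
    p = 1.0 - q
    lam = z * q / p
    s = 1.0
    for k in range(z + 1):
        s -= (lam**k * exp(-lam) / factorial(k)) * (1 - (q / p) ** (z - k))
    return s

def beta_catchup(q, z):
    """Exact race probability, P(z) = I_{4pq}(z, 1/2)."""
    p = 1.0 - q
    return betainc(z, 0.5, 4 * p * q)

for z in (1, 6, 10):
    print(z, satoshi_catchup(0.1, z), beta_catchup(0.1, z))
```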
<bsm117532>
Actually I made a little analysis which allows one to correctly combine work from miners using different targets. (You can't just add targets in that case) But I no longer think allowing miners to mine at different targets is terribly useful...
<bsm117532>
Because even with a DAG instead of a chain, there's still a lower limit on the allowed target due to latency effects, so there's only a narrow range of allowable difficulty targets anyway.
<kanzure>
low difficulty blocks should get auto-aggregated into some sort of pool proportional p2sh reward magic
BashCo has joined #bitcoin-wizards
BashCo_ has quit [Ping timeout: 240 seconds]
jouke has joined #bitcoin-wizards
<kanzure>
where is the STARK paper? i was told it was on eprint.iacr.org but i don't see it.
<tromp_>
bsm117532: did you get a copy of Sergio's Lumino paper?
<bsm117532>
kanzure: distributed aggregation is only possible if the rate of aggregation is similar to the network size (ping time). You can do centralized aggregation faster though...
<bsm117532>
tromp_ yes
<tromp_>
i asked Sergio by email but got no response yet
<tromp_>
anything you can say about it? his claims seem too good to be true?!
<tromp_>
is there a catch?
<bsm117532>
I haven't had a chance to read it yet.
<fluffypony>
kanzure: there was a pre-print, but the paper isn't out yet
EvilHero_ has quit [Remote host closed the connection]
<bsm117532>
I'm skeptical...64 bits per txn seems like it will be possible to brute-force some kind of collision. We'll see...
EvilHero_ has joined #bitcoin-wizards
<kanzure>
where's the preprint?
catcow has joined #bitcoin-wizards
adams__ has joined #bitcoin-wizards
abpa has joined #bitcoin-wizards
<Eliel>
bsm117532: one of his tweets about it described it something like: it looks like a hub-and-spokes-like system, but something different happens in the background.
CheckDavid has quit [Quit: Connection closed for inactivity]
<bsm117532>
How can a proof-of-proof-of-work commitment handle retargets? (e.g. for each 1 in the binary representation of the height, include the smallest block hash in that range)
<bsm117532>
e.g. at height 1025, tell me the smallest block hash in the first 1024 blocks, and the hash of the 1025th block.
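A minimal sketch of the commitment scheme described above: split the chain into dyadic ranges given by the set bits of the height, and commit to the smallest block hash in each range. Hypothetical names:

```python
def popow_commitments(block_hashes):
    """Given hashes of blocks 0..h-1 (as ints), return a list of
    (start, length, min_hash), one entry per set bit of the height h."""
    h = len(block_hashes)
    out, start = [], 0
    for bit in reversed(range(h.bit_length())):
        length = 1 << bit
        if h & length:
            out.append((start, length, min(block_hashes[start:start + length])))
            start += length
    return out

# At height 1025 = 0b10000000001 this yields exactly two commitments:
# the smallest hash among blocks 0..1023, and the hash of block 1024.
```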
laurentmt has quit [Quit: laurentmt]
<bsm117532>
Maybe amiller is around?
<uiuc-slack>
<amiller> hm
<bsm117532>
I could do it with more commitments, but your security falls back to trusting consensus, instead of a direct proof of work.
<uiuc-slack>
<amiller> i haven't thought through this carefully yet, but i think it is no problem, ideally the popow should be based on absolute difficulty rather than relative to the target
<uiuc-slack>
<amiller> it's clearly plausible but the challenge is to make sure that someone with very low difficulty blocks can't make a valid huge proof
<bsm117532>
Maybe this isn't an issue. I think you would need to enforce that the "highest work chain" is the one represented by the smallest hashes in this way, rather than the one represented by a direct sum of all blocks.
<bsm117532>
Going back to the above discussion of the incomplete beta function, it's straightforward to formulate a likelihood function that one chain contains more work than the other.
<bsm117532>
...by examining only log(height) hashes and using the (corrected relative to Satoshi) incomplete beta function formulation.
<bsm117532>
Such a client wouldn't actually care about retargets, only highest work.
<bsm117532>
Making low difficulty blocks wouldn't help you, because the likelihood function built from the incomplete beta would rank the longer chain as lower work (unless it accidentally got lucky with a very small hash -- but we know the probability of that).
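A sketch of that likelihood function, under the usual idealization that each hash is uniform on [0,1) after dividing by 2^256: if a chain's miners performed N hashes in total, the smallest observed hash m has density N(1-m)^(N-1), so two chains can be compared directly from their minima. Illustrative code, not taken from any paper:

```python
from math import log

def log_likelihood(m, n_hashes):
    """log f(m | N) for the minimum m of N uniform hashes."""
    return log(n_hashes) + (n_hashes - 1) * log(1.0 - m)

def more_work_log_odds(m_a, m_b, n_a, n_b):
    """Log odds that chain A (min hash m_a) took n_a hashes and chain B
    took n_b, versus the same two hypotheses swapped."""
    return (log_likelihood(m_a, n_a) + log_likelihood(m_b, n_b)
            - log_likelihood(m_a, n_b) - log_likelihood(m_b, n_a))

# The MLE of N given m is -1/log(1-m) ~ 1/m, so the chain with the smaller
# minimum hash is judged to embody more work, with quantifiable confidence.
```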
BashCo_ has quit [Remote host closed the connection]
CrazyLoaf has joined #bitcoin-wizards
todaystomorrow has joined #bitcoin-wizards
jl2012 has quit []
jl2012_ has joined #bitcoin-wizards
jl2012_ is now known as jl2012
jl2012 has quit [Client Quit]
jl2012 has joined #bitcoin-wizards
BashCo has joined #bitcoin-wizards
jannes has quit [Quit: Leaving]
paveljanik has joined #bitcoin-wizards
paveljanik has joined #bitcoin-wizards
paveljanik has quit [Changing host]
<andytoshi>
bsm117532: amiller: in my mimblewimble paper i just had a merkle-sum-tree of blockheaders and their difficulty
<andytoshi>
this is sufficient to prove the difficulty of any block in relation to that of any other block, to prove the cumulative difficulty before and after that block, etc
BashCo_ has joined #bitcoin-wizards
BashCo has quit [Ping timeout: 264 seconds]
<bsm117532>
andytoshi: I can vaguely figure out what that means, but it could stand to have more exposition.
<bsm117532>
So leaf nodes are ordered pairs (difficulty, txn Merkle root) no?
BashCo has joined #bitcoin-wizards
<tromp_>
they could also store cumulative difficulty instead of difficulty
BashCo_ has quit [Ping timeout: 240 seconds]
<bsm117532>
I think they do. But this is a different security model than I described above, depending on consensus to build the tree correctly. Given a truncated tree ("merkle path") the truncation points cannot be independently verified to represent the correct amount of work, unless you have the entire tree.
<bsm117532>
I tried for a while to find a way to make a data structure that had that property (using some magic known as "incremental hash functions") but I don't see a way to make it work.
<andytoshi>
yeah, the tree has every blockheader and it sums the difficulties in each header, so the root has the total cumulative difficulty of the chain
<tromp_>
with cumulative difficulty in the headers themselves, you don't need an additional field in the merkle tree of blocks
<andytoshi>
yep, although every internal node in the tree needs to commit to the sum of the difficulties of its children
<tromp_>
they already commit to what is in the leaves
<tromp_>
including cumulative difficulties
<andytoshi>
yes, but to verify cumulative difficulty without downloading every leaf, you need to have commitments in the nodes
<andytoshi>
this is the point of a merkle sum tree
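A minimal Merkle-sum-tree sketch along these lines: every node commits to its children's hashes and to the sum of their difficulties, so one path from the root proves cumulative difficulty without downloading every leaf. This is an illustration, not the mimblewimble paper's exact construction:

```python
import hashlib

# A node is a pair (hash, summed difficulty).

def leaf(header_hash, difficulty):
    h = hashlib.sha256(header_hash + difficulty.to_bytes(16, "big")).digest()
    return (h, difficulty)

def parent(left, right):
    total = left[1] + right[1]
    h = hashlib.sha256(left[0] + right[0] + total.to_bytes(16, "big")).digest()
    return (h, total)

def root(leaves):
    level = leaves
    while len(level) > 1:
        nxt = [parent(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])  # odd node carried up unchanged
        level = nxt
    return level[0]
```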
<bsm117532>
tromp_: What andytoshi describes is actually an incremental hash function. What you do is store the Merkle path to the most recent addition to the tree. You can dump the entire rest of the tree. From that last Merkle path, you can update the root by one block.
<tromp_>
complete verification requires checking every single block header
<bsm117532>
So it's log(n) storage for all nodes.
<bsm117532>
tromp_ Yes O(height) to re-verify.
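A sketch of that O(log n) incremental update, reusing leaf() and parent() from the sum-tree sketch above: keep only the roots of perfect subtrees (one per set bit of the leaf count, as in a binary counter or a Merkle mountain range), merge on append, and bag the peaks to get the running root. Names are illustrative:

```python
class IncrementalSumTree:
    def __init__(self):
        # peaks[i] is the root of a perfect subtree of 2^i leaves, or None
        self.peaks = []

    def append(self, header_hash, difficulty):
        carry = leaf(header_hash, difficulty)
        i = 0
        while i < len(self.peaks) and self.peaks[i] is not None:
            carry = parent(self.peaks[i], carry)  # merge equal-size subtrees
            self.peaks[i] = None
            i += 1
        if i == len(self.peaks):
            self.peaks.append(carry)
        else:
            self.peaks[i] = carry

    def root(self):
        acc = None
        for p in self.peaks:  # bag the O(log n) peaks into one root
            if p is not None:
                acc = p if acc is None else parent(p, acc)
        return acc
```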
<andytoshi>
tromp_: if you have cum difficulty commitments you can use them to get a proof-of-proof-of-work even with varying difficulty
<andytoshi>
if somebody forges a leaf's difficulty (as the sumtree will force them to do if they're forging a node's difficulty) this is the same as not mining a block and forging a normal popow
<tromp_>
is the powopow probabilistic?
<bsm117532>
tromp_: yes
<andytoshi>
yes
blackwraith has joined #bitcoin-wizards
<bsm117532>
Though, Bitcoin's PoW is probabilistic too -- you don't know how much hashpower was *actually* used to generate a set of block headers.
<tromp_>
so i'm saying you can do that with a plain merkle tree on top of blocks with cumdiff
<andytoshi>
i call the powopow a "compact chain"
<andytoshi>
tromp_: if i give you a merkle path in that how can you know anything at all about other blocks' difficulty?
blackwraith has quit [Read error: Connection reset by peer]
<tromp_>
any two merkle paths witness the cum diff in the corresponding range of blocks
<andytoshi>
...so the powopow requires you give me the headers of every "skipped" block, so there is no skipping at all?
<bsm117532>
no...popow requires only the smallest block hash within a range. Using the correct statistics, this gives you a probability distribution for the hashrate used to generate it.
<tromp_>
let me read your thm 7 ...
<andytoshi>
bsm117532: how can anyone tell what the smallest block hash is without downloading the whole range (or using a sumtree)?
blackwraith has joined #bitcoin-wizards
<bsm117532>
It would have to be committed to in the block header, so a client would download the popow with the most recent block header.
<bsm117532>
Aaaand my argument about security models goes out the window.
<andytoshi>
tromp_: section 3.3.1 i think is self-contained and has everything i'm trying to say
<tromp_>
yes; i'm digesting that right now:)
<tromp_>
is M around 2^80 for bitcoin now?
<tromp_>
or 2^70?
<andytoshi>
i'm not sure what you mean, M is the "real difficulty" of an individual block (vs the target difficulty which is the binary invalid/valid threshold)
<tromp_>
ok. there's also the complicating implicit factor of 2^32 in bitcoin
<tromp_>
but you're ignoring that
<andytoshi>
correct
<tromp_>
so each hash consumes all prior lesser hashes that combined are about as impressive
<bsm117532>
andytoshi: Actually you don't need to sum work. For Poisson processes like this, it's statistically equivalent to just keep the smaller of the block hashes instead of computing their summed difficulty. In that way you could just propagate the smallest hash up the tree. This combines these two ideas...
<andytoshi>
ah, i gotcha bsm117532 ... you can consider this as a "sum" where the sum function is min?
<bsm117532>
correct
<bsm117532>
This preserves the ability of a client to directly verify that the root hash actually represents a certain amount of work...
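That min() variant only changes the combiner in the sum-tree sketch above: propagate the smallest hash (as an integer) up the tree instead of summing difficulties. A hedged illustration:

```python
def parent_min(left, right):
    # here the second field of a node is the minimum hash, not a difficulty
    best = min(left[1], right[1])
    h = hashlib.sha256(left[0] + right[0] + best.to_bytes(32, "big")).digest()
    return (h, best)
```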
<gmaxwell>
you can't compute exact work from that min and get the same result as a comparison done the ordinary way. if you use the lowest hash you allow the 'high variance attacks'
<gmaxwell>
E.g. where you have a low but finite and non-trivial probability that you replace the whole chain with your one asic. :P
<bsm117532>
high variance = high work, and we know exactly the probability of that.
<bsm117532>
This idea can also be improved by presenting the smallest N hashes in a given subset, where the variance goes down as N goes up.
Joseph__ has quit [Ping timeout: 255 seconds]
<bsm117532>
You can get lucky once, but getting lucky N times is exponentially suppressed.
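A quick Monte Carlo sketch of that suppression: the k-th smallest of N uniform hashes has relative spread of roughly 1/sqrt(k), so the implied work estimate k/m_k tightens as more of the smallest hashes are presented. Illustrative, with made-up parameters:

```python
import heapq
import random

def relative_spread(n_hashes, k, trials=2000):
    """Relative standard deviation of the k-th smallest of n_hashes
    uniform draws; this tracks the error of the work estimate k/m_k."""
    samples = []
    for _ in range(trials):
        kth = heapq.nsmallest(k, (random.random() for _ in range(n_hashes)))[-1]
        samples.append(kth)
    mean = sum(samples) / trials
    sd = (sum((s - mean) ** 2 for s in samples) / trials) ** 0.5
    return sd / mean

for k in (1, 4, 16):
    print(k, relative_spread(2000, k))  # roughly 1, 1/2, 1/4
```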
<gmaxwell>
bsm117532: but I extend an _existing_ chain, so all but one of the N is not mine. that procedure also has the problem that adding an additional block to a chain likely doesn't increase its work.
<bsm117532>
Variance is the right way to think about this gmaxwell. It's a pretty straightforward analysis to figure out how many hashes need to be presented.
<gmaxwell>
the skiplist proofs do not have the problem that linear verifiers compute a different work than proof-of-proof readers.
<bsm117532>
gmaxwell: But when you're adding one, you're comparing forks with equivalent history up to some point. It's the work done after the forking point that must be compared, not variance on the larger, historical blocks. (e.g. you can allow more variance on the larger set of old blocks)
<gmaxwell>
Peer X offers you chain A, Peer Y offers you chain B -- each offering a proof of how much work it takes. Which chain do you take? You must reach the same conclusion as nodes who verified blocks one at a time or you lose the game.
Sosumi has joined #bitcoin-wizards
Joseph__ has joined #bitcoin-wizards
<bsm117532>
I guess what I'm meandering towards is that I think there exists another definition of "highest work" which can be verified by presenting log(height) hashes, and cannot be contradicted by the full history. (Or, full-history nodes would accept this other definition and not the work-sum Satoshi used)
<gmaxwell>
then you're needlessly creating a vulnerability where an attacker with a couple of lucky blocks can replace the whole chain, with much higher probability of success than the current system. Already the current system has P=1 of attacker success in finite but astronomically large time, if the attacker maintains any constant proportion of hashpower and hashpower increases exponentially with any exponent.
<gmaxwell>
I would guess that a constant number of samples removes the 'exponentially with any exponent' from that result.
<bsm117532>
I can compute how many "lucky blocks" he would need though, and incorporate that into the definition to kill this attack.
<tromp_>
ok andytoshi, i fully see the value of the compact block commitments
<bsm117532>
Another way to put it: given the oldest 2^n blocks, the subset of those blocks with the *largest* hashes can be truncated -- they don't prevent an attacker at all.
<andytoshi>
awesome tromp_ ... with mimblewimble i had the additional constraint that i needed a "canonical" powopow because i wanted people to be able to drop blocks entirely, but i think what i came up with there is generally useful
uiuc-slack has quit [Read error: Connection reset by peer]
uiuc-slack has joined #bitcoin-wizards
<tromp_>
did you ever compute the compact chain of bitcoin:-?
<andytoshi>
no, i started to, but ran out of time working out that section of the paper and then i had to give a talk about it :(
<andytoshi>
..i'm very curious, though i then totally forgot to follow through on it..
<tromp_>
btw, is there much advantage to committing to a merkle sum tree, rather than just the previous compact block?
<andytoshi>
you don't know what the previous block is until you're done mining :)
<andytoshi>
if not for that, no
<tromp_>
since the compact chain is of only log length, can't every client be expected to have the whole chain in memory?
<andytoshi>
yes, but i think for new verifiers to know that they got the canonical compact chain, you need the merkle sum-tree construction
rusty2 has joined #bitcoin-wizards
<andytoshi>
maybe if you aren't doing pow-o-pow in consensus code you don't need this, i'm unsure
Ylbam has quit [Quit: Connection closed for inactivity]
<tromp_>
it seems everyone can quickly query for all blocks on the compact chain if they don't already have them, and then they have no more need for the merkle sum tree
nabu has quit [Ping timeout: 252 seconds]
<andytoshi>
the question is, without the tree, given only the compact chain, can you verify that this is the real chain (no) and does this affect the security model (i don't know)
<tromp_>
we're talking about a chain of only a few hundred blocks right?
<andytoshi>
yes
<andytoshi>
...but a few hundred, out of hundreds of thousands of blocks
<tromp_>
but the tree is trivially computed from the chain?!
<tromp_>
from the compact chain that is
<andytoshi>
yes, but the chain doesn't commit to this tree
rusty2 has quit [Ping timeout: 268 seconds]
<tromp_>
it does implicitly through all the commitments to prev compact block
<andytoshi>
how does it commit to the prev compact block without committing to all of them?
<tromp_>
i suggest prev compact block hash as a header field
<tromp_>
instead of compact merkle sum tree hash
<andytoshi>
you can't tell what the previous block is until you've mined the current one
<tromp_>
yes, that would be a field like nonce that is not hashed for header-hash purposes
<bsm117532>
Hmmm...one generalization beyond using min() instead of sum() to reduce variance is to use an Order Statistic. But that's the "kth smallest value" and requires you to have all k values... :-(
<tromp_>
uhm wait
<tromp_>
i'm talking nonsense
<andytoshi>
:)
<tromp_>
ok; i see the need for the merkle tree now
MaxSan has joined #bitcoin-wizards
<tromp_>
i was thinking of how, with an asymmetric pow, you still add some witness data to the header after having mined a block
<andytoshi>
ah, yeah, i'm pretty sure that doesn't work here
<tromp_>
so you could also add the prev compact block there
<tromp_>
and consider the block valid only if you picked the correct block hash there
<tromp_>
wouldn't that preserve the security model?
<andytoshi>
yes, but it does force miners to gamble
<tromp_>
they just check that what they build on computed it correctly
<tromp_>
what is there to gamble?
<tromp_>
miners should not build on invalid blocks...
MoALTz has quit [Quit: Leaving]
abpa has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<andytoshi>
i'm not understanding you, are you talking in the context of an asymmetric pow? how does this change the fact that you don't know the "real difficulty" of a block until it's mined?
<tromp_>
yes, consider e.g. cuckoo cycle as pow
<tromp_>
only after mining do you have the cycle which needs to go into block header as well
chjj has joined #bitcoin-wizards
<tromp_>
oh wait, that's still part of the hashcash hash
<tromp_>
anyway, it's true you only know the compact predecessor after mining
rusty2 has joined #bitcoin-wizards
<tromp_>
if you add it to the header, then blocks need to commit to predecessor by hashing both its header and this extra field
<tromp_>
i guess having both a header hash and extended-header hash is too messy:(
quietbeast has joined #bitcoin-wizards
quietbeast has quit [Ping timeout: 240 seconds]
rusty2 has quit [Ping timeout: 240 seconds]
<tromp_>
or maybe it's clearer to refer to the full header (including the prev compact link) and the truncated header that is used for hashcash
helo_ is now known as helo
JHistone has joined #bitcoin-wizards
chjj has quit [Ping timeout: 252 seconds]
Joseph__ is now known as NewLiberty
Sosumi has quit [Quit: Bye]
abpa has joined #bitcoin-wizards
Ylbam has joined #bitcoin-wizards
abpa has quit [Ping timeout: 255 seconds]
EvilHero_ has quit [Ping timeout: 258 seconds]
Aranjedeath has joined #bitcoin-wizards
Noldorin has joined #bitcoin-wizards
chjj has joined #bitcoin-wizards
JHistone has quit [Ping timeout: 276 seconds]
Dav2 has quit [Remote host closed the connection]
_whitelogger has joined #bitcoin-wizards
<tromp_>
so if you're building block i+1, then you compute the compact chain successor of block i, let's call it block j, and you commit to block j+1
nullfxn has quit [Quit: leaving]
<tromp_>
this is fully equivalent to committing to the compact merkle sum tree, but perhaps conceptually simpler
tromp has quit [Read error: Connection reset by peer]
tromp has joined #bitcoin-wizards
<tromp_>
well, not fully equivalent, since it doesn't compute the effective difficulties