wumpus changed the topic of #bitcoin-wizards to: This channel is for discussing theoretical ideas with regard to cryptocurrencies, not about short-term Bitcoin development | http://bitcoin.ninja/ | This channel is logged. | For logs and more information, visit http://bitcoin.ninja
www has quit [Ping timeout: 256 seconds]
hearn has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
elastoma has quit [Ping timeout: 246 seconds]
c0rw|away is now known as c0rw1n
droark has joined #bitcoin-wizards
jtimon has quit [Ping timeout: 252 seconds]
elastoma has joined #bitcoin-wizards
ryanxcharles has quit [Ping timeout: 246 seconds]
Tiraspol has quit [Ping timeout: 256 seconds]
Tiraspol has joined #bitcoin-wizards
Tiraspol has joined #bitcoin-wizards
nullbyte has quit [Read error: Connection reset by peer]
nullbyte has joined #bitcoin-wizards
wallet42 has joined #bitcoin-wizards
FranzKafka has joined #bitcoin-wizards
NewLiberty has joined #bitcoin-wizards
belcher has quit [Quit: Leaving]
andytoshi has quit [Read error: Connection reset by peer]
andytoshi has joined #bitcoin-wizards
nullbyte has quit [Ping timeout: 246 seconds]
nullbyte has joined #bitcoin-wizards
erasmospunk has quit [Remote host closed the connection]
Dr-G has quit [Disconnected by services]
Dr-G2 has joined #bitcoin-wizards
PRab has joined #bitcoin-wizards
Giszmo has joined #bitcoin-wizards
wallet42 has quit [Quit: Leaving.]
wallet42 has joined #bitcoin-wizards
wallet42 has quit [Client Quit]
c0rw1n is now known as c0rw|zZz
flower has quit [Quit: -]
c-cex-yuriy has quit [Quit: Connection closed for inactivity]
p15x_ has joined #bitcoin-wizards
p15x has quit [Ping timeout: 248 seconds]
sergiohlb has quit [Remote host closed the connection]
_whitelogger_ has joined #bitcoin-wizards
shen_noe has joined #bitcoin-wizards
shen_noe has quit [Client Quit]
void_hero has quit [Quit: Lost terminal]
akrmn1 has joined #bitcoin-wizards
superobserver has joined #bitcoin-wizards
akrmn has quit [Ping timeout: 264 seconds]
TheSeven has quit [Ping timeout: 246 seconds]
TheSeven has joined #bitcoin-wizards
__FranzKafka__ has joined #bitcoin-wizards
FranzKafka has quit [Ping timeout: 256 seconds]
shreyas__ has joined #bitcoin-wizards
ryanxcharles has joined #bitcoin-wizards
<dgenr8>
GreenIsMyPepper: what is the bitcoin address referred to in the LN paper? is it the multisig address funded in receiver's payment hub channel?
sadoshi has joined #bitcoin-wizards
p15x has joined #bitcoin-wizards
p15x_ has quit [Ping timeout: 264 seconds]
<bramc>
On the plus side, the likelihood of miners ever getting their shit together to censor particular utxos is seeming extremely unlikely
<bramc>
In all seriousness, what implications does the current mess have on possible future 'lightly' backwards incompatible changes?
<bramc>
We can also infer that miners don't have their shit together to introduce incompatible honey-pot transactions into the network or we'd be seeing a lot more forks.
shen_noe has joined #bitcoin-wizards
shen_noe has quit [Client Quit]
ThomasV has joined #bitcoin-wizards
<leakypat>
bramc: this demonstrated to me how use of the system is reliant on having access to an up to date full node
<leakypat>
Miners do a specific thing and without an up to date full node, can't be held to account
<CodeShark>
the problem is that they lied
<leakypat>
It also shows they will take shortcuts regardless, if it gives them an advantage
<CodeShark>
we should have never gone to BIP66 95%
<leakypat>
Full nodes first?
<leakypat>
So soft fork= hard fork?
<CodeShark>
or rather, we should have...because I'm glad this came out now...and I think BIP66 is a good thing
<CodeShark>
but the soft fork process completely broke because miners vote for rules they don't even enforce
<CodeShark>
it wasn't supposed to be a hard fork
<CodeShark>
lol
<leakypat>
What percentage of full nodes need to upgrade though ?
<CodeShark>
it was a two phase transition
<leakypat>
Blockchain .info accounts for n% of wallets and are on 0.7.0
<CodeShark>
a couple days ago we hit the second goal
<CodeShark>
of 95%
<CodeShark>
after 95%, miners are supposed to reject v2 blocks
<CodeShark>
but the miners voted for the change even though they were not checking this at all
<dgenr8>
supporting v3 was easy. 1-byte change.
* dgenr8
has updated his working thesis to "nobody really knows how LN is supposed to work"
<leakypat>
So we should have waited until we have proof no miner has built a v2 block for n blocks ?
<CodeShark>
the rule was 950 of the last 1001, I believe
<CodeShark>
v3 blocks
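The threshold rule CodeShark describes can be sketched as follows. This is a simplified model of the supermajority logic BIP66 reused (Bitcoin Core's `IsSuperMajority`), using a 1000-block window; the function names are illustrative, not Core's.

```python
# Simplified model of the BIP66 supermajority rule discussed above:
# once ~95% of the trailing window of blocks advertise version >= 3,
# v2 blocks are rejected outright. Function names are illustrative,
# not Bitcoin Core's.

def count_upgraded(versions, window=1000, min_version=3):
    """Count blocks in the trailing window at or above min_version."""
    return sum(1 for v in versions[-window:] if v >= min_version)

def v2_blocks_rejected(versions, window=1000, threshold=950):
    """True once the 95% threshold is met and v2 blocks become invalid."""
    return count_upgraded(versions, window) >= threshold
```

The failure discussed in this log is that miners advertised version 3 (raising the count) without actually enforcing the new rules.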
<leakypat>
Before full nodes start rejecting v3?
<CodeShark>
yes
<leakypat>
Makes sense
<leakypat>
Rejecting v2 sorry
Giszmo has quit [Quit: Leaving.]
<CodeShark>
the scary part is if they can't even enforce a one line if statement, how can we trust them to enforce anything at all :p
<leakypat>
I think the lesson is they should never be trusted
<CodeShark>
that's the way it was supposed to work - if wallets and explorer apps actually did proper validation it wouldn't be our problem
<CodeShark>
it would be the dumb miners' problem
<CodeShark>
but wallets and explorer apps don't do proper validation either...because they trust miners to do it...derp
<leakypat>
Quite honestly not enough companies engage with core development
<leakypat>
Or follow it
<leakypat>
They see it as an API they call
<bramc>
technically bip66 is a hard fork, but it's a trivial fix of no feature consequence whatsoever
<leakypat>
The problem is that there are companies like bc.i who probably can't upgrade their systems
<leakypat>
And could never be relied on to
<gmaxwell>
bramc: no it isn't. Go spin up 0.8 ... syncs the chain just fine. But BIP66 is in force. No hardfork.
<CodeShark>
bramc: is it? there are transactions that are invalid before BIP66 that are now valid
<CodeShark>
?
<bramc>
er, well, maybe we need better vernacular. 'hard' fork means that older clients wouldn't accept it, which bip66 is clear of
<leakypat>
backwards compatible is the definition I believe
<bramc>
So I was wrong. What's the term for a change where older dumb miners might make invalid blocks?
mjerr has joined #bitcoin-wizards
<bramc>
That's somewhere between hard fork and soft fork
<bramc>
I mean, it's a soft fork, but it's more like a pillow than butter
<leakypat>
A rough fork
<jcorgan>
it's like semi-soft cheese
<leakypat>
A spork?
<bramc>
There are a few more things in the pipeline which are also rough forks, and it's fairly likely that miners will learn from this one that they can fuck over other miners by introducing subtly invalid transactions into the network
<bramc>
So the next one might be a lot rougher
<CodeShark>
so what's the solution?
<CodeShark>
I mean, I know what the long-term solution is
<CodeShark>
what's the short term solution?
bedeho has joined #bitcoin-wizards
<bramc>
Uh...
<bramc>
I'm open to suggestions
<bramc>
Doing a soft fork where it's trivial to tell when old stuff is invalid seems about as easy as it gets
<CodeShark>
we should definitely place a moratorium on soft forks for the time being...that goes without saying :p
<jcorgan>
name and shame (where possible)
<leakypat>
Miners first
<leakypat>
Name and shame clean up etc
<CodeShark>
miners are in a way easier to target because a few big pools are enough to sway consensus
<leakypat>
Once proof exists that no invalid block has been built for n time
<CodeShark>
(which isn't necessarily such a good thing)
<leakypat>
Then nodes upgrade?
<CodeShark>
but the nodes that really should be validating are wallets, IMHO
<leakypat>
Major companies get big warnings
<bramc>
Maybe spv changeover should be integrated and wallets should be able to defend themselves against busted services?
<leakypat>
With lists confirming upgrades are done or not made public
<jcorgan>
though the constant pointing out of bc.i's incompetence doesn't seem to have made any difference
<leakypat>
But with a hard deadline
<leakypat>
jcorgan: you can only do so much, one has to wonder what they will do in a hard fork
<leakypat>
If they are stuck on 0.7.0
<bramc>
What you really want to do is burn the miners - make the ones who aren't doing it right *consistently* lose
<gmaxwell>
presumably they wildly misestimate what work dealing with it will take and think they can just make a small patch.
<bramc>
Like, right now they only lose if they happen to build on an invalid block which somebody else made. If they had to proactively set a flag indicating that they'd upgraded that would get their attention
<bramc>
Although of course they might set the flag without fixing anything, but at least it would burn the ones who literally did nothing
<CodeShark>
bramc: the solution ultimately needs to be economic...game theory
<CodeShark>
but...
<CodeShark>
we're far from that :p
<leakypat>
A lot of the block explorers probably use pre 0.10 because they expect the blocks to arrive in order
<CodeShark>
if the game theory is right people will do what needs to be done by themselves
<CodeShark>
but...
<jcorgan>
CodeShark: true, but sometimes the feedback loops have long delays
zooko has joined #bitcoin-wizards
<bramc>
gmaxwell, Not sure what you mean, isn't support for the new thing a fairly trivial patch?
<jcorgan>
for miners, it seems the financial penalty of losing coinbase revenue after orphaning will at least put real pressure on them
<CodeShark>
jcorgan: if it happened more often perhaps that would be true
<CodeShark>
if it only happens once every few months many miners might still carry on with their current strategy
<bramc>
Yeah the amount of financial burn they've gotten this time hasn't been all that much. If they had to put in a flag that would get them consistently orphaned
<bramc>
Unfortunately many of the miners who are causing problems also don't accept any transactions, so keeping bad transactions circulating for a while just to fuck with them wouldn't help
<bramc>
That would be fun though - have full nodes circulate bad transactions for a few days after the changeover
<jcorgan>
hmm
<CodeShark>
ideally this would happen more often...but only the nonvalidating miners would suffer
<bramc>
Or somebody could 'altruistically' connect directly to all the full nodes they could and send some out of date transactions
jae_ has quit [Remote host closed the connection]
<bramc>
CodeShark, Remember that this isn't actually a good state of affairs, as much fun as enacting vengeance might be
<CodeShark>
bramc: this isn't about vengeance - it's just economics :)
<jcorgan>
not vengeance, more like proactive antiseptic cleansing
<CodeShark>
we want those who aren't following the rules to be basically ignored by the rest of the network
<CodeShark>
but...lol
<jcorgan>
what's to stop someone from doing it now?
<jcorgan>
btw, did anyone do a diff on the chain tips after the fork resolved? were there any actual TXes affected?
<CodeShark>
I don't have an old node running
<CodeShark>
I'd have to grab the bad blocks from somewhere
<jcorgan>
i think one of the bad blocks had ~1600 transactions
p15x has quit [Ping timeout: 256 seconds]
<bramc>
It only takes one bad transaction to invalidate the whole thing
<bramc>
Presumably there were other forks which got orphaned fast so nobody noticed
<jcorgan>
what i mean is, i'm hoping that the valid chain mined all those from their own mempool, so there were no actual TXes that got confirmed on the bad chain that didn't also get confirmed on the valid one
<bramc>
jcorgan, *shrug* shit happens
<bramc>
Although that is an interesting question. Presumably there had to be a confused wallet which issued the bad transaction in the first place
<CodeShark>
could have been mempool backlog, no?
<CodeShark>
how deep did the fork go?
<jcorgan>
this one was 3 blocks
<jcorgan>
the first one was 6
<gmaxwell>
this most recent event had a lot of transactions.
<bramc>
jcorgan, The one yesterday was 6 blocks
<CodeShark>
which of the three blocks was the 1600 tx one?
<bramc>
How smart are the wallets about the changeover?
<CodeShark>
not very
<CodeShark>
actually, not at all for the most part
<bramc>
Were wallets supposed to have all changed over a while ago to avoid problems?
<CodeShark>
but what's worse is that unless you run a full validator you still cannot invalidate a block
<gmaxwell>
of the three two had lots of transactions.
<CodeShark>
this was a special case, bramc, where the invalidation could have been done with headers only
<CodeShark>
in general it's not possible
<bramc>
It seems like it should be three step: 1) miners start accepting the new transaction type 2) wallets start defaulting to the new type 3) miners start orphaning the old type
<gmaxwell>
bramc: bitcoin core never saw these blocks, btcd never saw these blocks. Everything else did.
<bramc>
gmaxwell, That would seem to imply that they were propagating quite well
shesek has quit [Ping timeout: 246 seconds]
<gmaxwell>
the blocks had 1142/2315/1599 transactions respectively.
<CodeShark>
wow - that's quite a few
<jcorgan>
that's why i'm wondering what the actual TX diff was
<CodeShark>
what were the block numbers?
<gmaxwell>
yea, I hoped someone would do that while I was out.
<jcorgan>
me too :-)
<bramc>
followed by 4) regular full nodes stop distributing no-longer-valid transactions as part of the program to fuck with miners who haven't gotten with the program :-)
<gmaxwell>
363999' 363998' 363997'
<bramc>
gmaxwell, Any idea how many of the transactions were invalid?
<CodeShark>
I would have if I had an older version node running - if anyone has the bad blocks I can do a diff
<gmaxwell>
bramc: almost certainly none of them.
<gmaxwell>
bramc: we haven't had a non-canonical signature at all in the chain for over three months.
<bramc>
gmaxwell, So, uh, what caused the invalidation?
<gmaxwell>
oh ha! I must be incorrect.
<CodeShark>
bc.i still has the three blocks apparently...
<CodeShark>
but...
<CodeShark>
lol
<gmaxwell>
bramc: I thought 7' was v2, but its tagged v3..
<CodeShark>
I don't really want to use that as my source for data
<gmaxwell>
bramc: so it must actually have an encoding violation.
<bramc>
(Bram Doesn't Do Ops, hence my not having any info on blocks to share)
<gmaxwell>
CodeShark: they don't have a way to fetch the raw block AFAIK.
<bramc>
gmaxwell, It would seem like a strange coincidence for a new violation to show up right when they become illegal, wouldn't it?
<CodeShark>
gmaxwell: you can still grab the tx hashes
<gmaxwell>
oh sorry, I was looking at the wrong tab.
<gmaxwell>
bramc: 7' is just tagged with v2, thats the only invalidity.
<CodeShark>
in the worst of cases you can just scrape the site :p
<bramc>
gmaxwell, Uh... there's only so much you can do to save people from themselves
<gmaxwell>
bramc: the whole scheme is _intended_ to cause some orphaning, to get the straggling hashpower to upgrade or give up.
<gmaxwell>
bramc: the reason that there have been no violations is because miners have been rejecting those transactions since 0.8. People actually still create them all the time, in rather large volume.
<bramc>
Here I am, foolishly offering suggestions on the assumption that this scale of bad couldn't have happened if reasonable measures were followed, but no, it was all done right to begin with
<gmaxwell>
There are old versions of armory and bc.i's mobile wallet, for example, that pump them out. But then they don't get mined, and people go change their software.
<bramc>
that's... depressing
<bramc>
People still have this weird thing where they think end users get some advantage out of their software not being on autoupdate. It's nuts.
CodeShark_ has quit [Read error: Connection reset by peer]
<gmaxwell>
bramc: yea, well we did have issues with the BIP16 deployment that we learned from. E.g. we made sure in BIP66 to phase out the transactions in question years before; and completely eliminated them from the network long before... then phased in with a very high threshold... I'm certainly interested in knowing what more we could do.
CodeShark_ has joined #bitcoin-wizards
<gmaxwell>
I know a few things we could improve on-- e.g. we overly focused on miners and not enough of other bits of public infrastructure.
<CodeShark>
so the three blocks are 3ae1223... 63f97f... and 12dbd4... ?
<bramc>
Yes the public infrastructure clearly could use some help, but I for one am at a loss as to any major process improvements to be had
<bramc>
Set the 'lazy and incompetent' bit to false?
<jcorgan>
CodeShark: i think 12db is the first valid one, could be wrong though
<gmaxwell>
CodeShark: yea, and thats a weird way to truncate the hashes, I use the trailing bytes! :)
<CodeShark>
yeah, probably better to use the trailing bytes
<leakypat>
Ok, so we knew that there were v2 blocks still being produced but had assurances from 95% that they wouldn't be built on ?
<bramc>
leakypat, Yes that's the crux of the problem, the orphaning would have had no problems if they hadn't been lying
<gmaxwell>
leakypat: right, the v2 block at 363997' is unsurprising and harmless.
<leakypat>
gmaxwell: so not much more you can do on the miners side then, public lists of infrastructure not upgrading/ incompatible
<bramc>
Also, with regards to the transactions which got orphaned, I wonder how far back wallets go when noticing reorgs, and it might be good if somebody as a public service would collect those old orphan blocks and reintroduce their transactions (I think that was the conversation y'all were just having and I wasn't understanding)
<leakypat>
although hard to fully verify, public assurances from the main infra players that they have upgraded would at least focus them
<bramc>
Does bitcoin core keep old transactions in a stale mempool and bring them back in the case of a reorg?
<gmaxwell>
leakypat: basically 95% means that 5% will be producing orphans which is only a small multiple worse than the levels that happen ordinarily due to latency, they'd be unlikely to manage a two block reorg (0.05^2), plus the 5% presumably would drop rapidly once the orphaning started.
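gmaxwell's back-of-envelope numbers can be checked with a toy model that treats each block as an independent draw from the hashpower distribution:

```python
# Toy model of gmaxwell's estimate: if stragglers control 5% of
# hashpower and each block is an independent draw, the chance of n
# consecutive straggler blocks is 0.05**n. A two-block straggler reorg
# is already ~1 in 400; a six-block run is vanishingly unlikely, which
# is why a 6-deep fork points at non-validating v3 miners rather than
# stragglers alone.

def straggler_run_probability(straggler_share, run_length):
    """P(the next run_length consecutive blocks all come from stragglers)."""
    return straggler_share ** run_length
```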
<bramc>
Yeah for it to get up to 6 means that something is busted
<gmaxwell>
bramc: every bitcoin node is a service which reintroduces transactions. They're returned to the mempool when disconnecting the old block.
<jcorgan>
yeah, in part this only grabbed our attention because the first instance was two miners that made up 40% of hashrate
<bramc>
gmaxwell, Well gee, how am I supposed to offer suggestions in the midst of basic competence already being in place?
<gmaxwell>
bramc: so the only reason there should be transactions that fell out of the chain is because they were conflicted via double spending at the time they were initially mined.
<bramc>
Note to self: Don't make a new cryptocurrency without at least using the existing bitcoin codebase as a reference
shesek has joined #bitcoin-wizards
<jcorgan>
so it went on 6 blocks before the system routed around it
<zooko>
bramc: ;-)
<bramc>
If we try to estimate how many miners are being 'bad', there needs to be separate estimates of how many miners are/were creating bad blocks vs. not doing their necessary validation
<zooko>
Yeah, you core folks do impressive work.
<jcorgan>
zooko: heh, i've staked my retirement on that fact :-)
<jcorgan>
no pressure guys
<bramc>
If we figure that it took a day for something bad to happen, and that 95% of new blocks were valid, that means... after 5 bad blocks got made, one of them got to 6. That seems highly implausible. If it were that bad it wouldn't have been able to self-heal at all
<gmaxwell>
bramc: it's too little data to get a good estimate.
<gmaxwell>
bramc: well we can distinguish: non-upgraded blocks are v2, while lacking-validation-blocks are "v3".
<gmaxwell>
each of these two incidents have been a v2 and then a run of v3. There have also been a couple v2 orphans in the last two days that didn't get extended.
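gmaxwell's v2/v3 diagnostic can be sketched as a simple classifier (names are illustrative; in practice `extends_invalid_chain` would be determined by checking the stale block's ancestry against the rejected fork):

```python
# Sketch of the diagnostic gmaxwell describes: a stale v2 block
# indicates a non-upgraded miner, while a stale v3 block extending an
# invalid chain indicates a miner that voted v3 without validating.
# Names are illustrative.

def classify_stale_block(version, extends_invalid_chain):
    if version < 3:
        return "non-upgraded (still mining v2)"
    if extends_invalid_chain:
        return "voted v3 but not validating"
    return "ordinary orphan (e.g. propagation latency)"
```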
<bramc>
gmaxwell, Yes it's possible that the 6 was simply unlucky, but it seems likely that a fair number of miners kept making v2 blocks even after they voted for v3
<gmaxwell>
e.g. there was one right before the run of 6.
droark has quit [Quit: ZZZzzz…]
<gmaxwell>
bramc: yea, no, we know for a fact that it was on the order of half the hashpower mining without validating there.
<gmaxwell>
but we also know that some of that have 'improved' their behavior.
<bramc>
gmaxwell, That's what I was afraid of.
<gmaxwell>
Though improved may only mean that they're also validating enough to reject v2 now. :)
<bramc>
In which case that run of 6 could easily have gotten way, way longer
<gmaxwell>
(but perhaps not a v3 block with a invalid transaction in it)
<bramc>
gmaxwell, That should be easy enough to test - make a peer which connects to as many full nodes as it can and drips bad transactions to them
<gmaxwell>
bramc: yes, it only was as short as it was because: it was mining blocks at ~half rate (due to the other half of the network being on the other side), and because the major operators responsible for that were able to be prodded.
<leakypat>
So we can't rely on miners to validate?
<leakypat>
Thy would rather hope for the best and get a speed advantage
<gmaxwell>
bramc: not so useful; because even non-upgraded nodes will not mine invalid txn.
<bramc>
Oh right, hmm
<gmaxwell>
I mean someone could burn ~25 bitcoin and intentionally create such a block.
<bramc>
It's unlikely that anybody has enough mining power that it's worth them sabotaging everybody else
<leakypat>
But also we can't reliably tell how many full nodes on the network have upgraded
nullbyte has quit [Ping timeout: 248 seconds]
<gmaxwell>
4ish months ago there was a miner who was mining the invalid txn-- tracked him down, he was on current software but someone seems to have 'optimized' out all his signature validation. (he fixed it right away).
<gmaxwell>
so it's possible that there is another genius like that out there and flooding invalid txn will result in a block.
<jcorgan>
seems like a useful prophylactic measure
<phantomcircuit>
leakypat, we should figure out a way to make it so miners on margin make more money by validating blocks than by not
ThomasV has quit [Ping timeout: 276 seconds]
<gmaxwell>
Then again, some kind soul on reddit (of the sort of typical kind souls on reddit) already accused Peter Todd of having created an invalid transaction (even though there had been none) in order to make a point about blocksize... so uh.. yea, I'm not going to do it.
<phantomcircuit>
which probably means they need to soft fork a massive drop in blocksize into place
<phantomcircuit>
which is maybe a bit circular...
<phantomcircuit>
fix the soft fork issues with a softfork
<phantomcircuit>
goto 1
nullbyte has joined #bitcoin-wizards
NewLiberty has quit [Ping timeout: 276 seconds]
<leakypat>
I see, hence bramc suggestion of getting invalid transactions into the miners blocks?
<leakypat>
They would have to check each one
<phantomcircuit>
leakypat, that probably wont work
arubi_ has quit [Quit: Leaving]
<gmaxwell>
bramc: a possible argument is that perhaps we've done soft-forks too infrequently... leaving people poor at handling them.
<phantomcircuit>
i dont think any of the major miners are including transactions they haven't verified
<bramc>
gmaxwell, They might not be much better after this mess, nobody got burned all that bad
<zooko>
gmaxwell: my trusted colleague Brian Warner recently said this to me. Something like: things that happen less often than every few weeks will fail when you try to make them happen.
<zooko>
gmaxwell: on the topic of software/protocol upgrades.
<leakypat>
Yeah, there is a monthly upgrade of the libraries where I work, regardless
<leakypat>
Everyone has to sync up
<leakypat>
With sometimes useless calls with dependent parties, but nevertheless, the process works
<zooko>
*nod*
<gmaxwell>
I worry that there are already a lot of marginal participants; too much throughput will push them out-- that isn't a good way to be inclusive.
<zooko>
Hm.
<zooko>
Isn't that sort of the opposite of what you were just saying?
<zooko>
If "marginal" = "careless/inattentive/etc."
<leakypat>
So there would be a monthly Bitcoin core release call of some kind
<leakypat>
Sounds logistically a nightmare :/
<gmaxwell>
zooko: hm not quite, you can be resource strapped but still handle a big upgrade once a year.
<gmaxwell>
But not be able to handle one once a month.
<zooko>
Hm.
<leakypat>
You would think the 700m vc funding could throw in a few coordination heads
<gmaxwell>
and sure it looks like marginal == inattentive; but thats only because we only see the failures.
<leakypat>
(By heads I mean head counts)
<bramc>
leakypat, You didn't think any of that VC funding would go towards making the ecosystem healthier, did you?
<leakypat>
bramc: my naïveté is bottomless
<bramc>
A true malleability fix might be dicier than this, because it causes some utxos to be spent which older full nodes don't realize are already spent
<gmaxwell>
some of it has-- but really, who are you going to fund to do that?
<jcorgan>
gee, if only there were an industry consortium around bitcoin that could take on these sorts of longer term thinking, ecosystem related issues, and be funded by lots of ecosystem participants as a way of helping ensure the "system" will support their own more narrower goals
<bramc>
gmaxwell, A fair number of the entities which aren't doing validation properly probably have investment
<bramc>
jcorgan, I'm sure one could scrounge up a few thieves and pedophiles to sit on the board of such an entity
roconnor has quit [Quit: Konversation terminated!]
<CodeShark>
gmaxwell: I don't really have a good setup to do a diff on the transactions - but if you want me to I can scrape them
<CodeShark>
I have the tx hashes in files
<jcorgan>
in all seriousness, though, individual ecosystem particpants usually have too narrow a view to directly invest in "greater good" type things, but are often willing to set aside a portion of their investment capital to fund an organization that would focus exclusively on those type of things, as long as everyone else were putting money in the pot as well
<CodeShark>
if you have a node with txindex you can easily see which ones got dropped
<CodeShark>
unfortunately, I don't have such a node accessible atm
<jcorgan>
but bitcoin has never seen an organization that actually fulfills that role
<CodeShark>
the bitcoin foundation doesn't count? :p
<jcorgan>
lol
<gmaxwell>
CodeShark: if you've got the tx hashes in the orphaned blocks, skipping the coinbase txn (for obvious reasons) give to me and I can quickly check which made it into the main chain.
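The check gmaxwell offers to run amounts to a set difference (a sketch; the `in_main_chain` callable is a stand-in for a txindex lookup, e.g. a `getrawtransaction` RPC against a full node):

```python
# Sketch of the diff discussed here: given the txids from an orphaned
# block (coinbase excluded, since it can never reappear), report which
# transactions did not make it back into the main chain. The lookup
# callable is a stand-in for a txindex query against a real node.

def dropped_transactions(orphan_txids, in_main_chain):
    """Return txids from the orphaned block absent from the main chain."""
    return [txid for txid in orphan_txids if not in_main_chain(txid)]
```

Since nodes return disconnected blocks' transactions to the mempool, the expected result is an empty list except for transactions conflicted by double spends.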
<CodeShark>
one of the blocks apparently is empty
<bramc>
So it's fair to say that the 95% which voted were fairly consistent about producing new valid blocks on schedule, but roughly half of them (weighted by mining power) aren't/weren't doing validation properly
<CodeShark>
6053a7b0d5a2 appears to be empty
p15x has joined #bitcoin-wizards
<bramc>
It's hard to see how to avoid the moral hazard here. Validating causes at least a little bit of latency, which costs something, and the ones who aren't validating hardly got burned.
<CodeShark>
I can send you the other two minus the coinbase
<bramc>
On the other hand, since headers-only validation could have handled this just fine, maybe that would be enough and should be what's emphasized in the future.
<gmaxwell>
CodeShark: gimme gimme
<CodeShark>
ok, sending an email...one sec
<gmaxwell>
bramc: nah, just lucky in this case. in the BIP-16 change there were actual invalid txn mined.
<leakypat>
I'm sure there are volunteers out there who would coordinate things- it's a core dev stakeholder coordinator role
<leakypat>
They need to be able to understand stuff, but really good at bugging the crap out of people and tracking things
<jcorgan>
sometimes volunteers emerge with the time/effort/willingness to do that
<gmaxwell>
leakypat: if things weren't coordinated here it was only because it wasn't thought of, I mean, we got 95% (lol) hashpower in three months onto this; there was significant effort coordinating with miners, but they didn't exactly volunteer "oh btw, we're not actually validating"
<bramc>
Core devs tend to not be so big on that whole 'talking to people' thing
<leakypat>
gmaxwell: I meant more doing regular calls with the big companies etc
<bramc>
gmaxwell, 'We're only 5% of all hashpower, us not validating hardly breaks anything'
<CodeShark>
gmaxwell: sent
<leakypat>
don't get me wrong , I think it is the companies themselves that should be being proactive
<gmaxwell>
I don't. I mean, I don't think "should" matters.
<gmaxwell>
nothing good gets done by spending too much time worrying about should.
ThomasV has joined #bitcoin-wizards
<jcorgan>
a company might not invest $X directly in something that only has long term benefit, because it puts them at an immediate disadvantage to their competitors, but if if "everyone" were to invest $X in a common consortium type organization that look after those type of things, then they'd all benefit and not suffer relative to one another
<jcorgan>
the trick is in organizing the whole thing
nullbyte has quit [Ping timeout: 248 seconds]
<CodeShark>
bramc: headers-only validation would still miss 99.999% of problems :p
nullbyte has joined #bitcoin-wizards
<CodeShark>
this was one of the very few exceptions where it would have actually sufficed
NewLiberty has joined #bitcoin-wizards
<CodeShark>
until someone included a bad DER in a v3 block :p
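The "bad DER" case can be made concrete. The following is an abbreviated Python port of the strict-encoding test BIP66 specifies (`IsValidSignatureEncoding` in Bitcoin Core); a signature failing it inside a v3 block is exactly the violation a headers-only client cannot catch. This is a sketch, not consensus code.

```python
# Abbreviated port of BIP66's strict-DER signature test. The input
# includes the trailing sighash-type byte, as it appears on the script
# stack. Sketch only, not consensus code.

def is_strict_der(sig: bytes) -> bool:
    if len(sig) < 9 or len(sig) > 73:
        return False
    if sig[0] != 0x30 or sig[1] != len(sig) - 3:
        return False
    len_r = sig[3]
    if 5 + len_r >= len(sig):
        return False
    len_s = sig[5 + len_r]
    if len_r + len_s + 7 != len(sig):
        return False
    if sig[2] != 0x02 or len_r == 0:
        return False
    if sig[4] & 0x80:                      # R must not be negative
        return False
    if len_r > 1 and sig[4] == 0x00 and not (sig[5] & 0x80):
        return False                       # no unnecessary R padding
    if sig[len_r + 4] != 0x02 or len_s == 0:
        return False
    if sig[len_r + 6] & 0x80:              # S must not be negative
        return False
    if len_s > 1 and sig[len_r + 6] == 0x00 and not (sig[len_r + 7] & 0x80):
        return False                       # no unnecessary S padding
    return True
```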
<CodeShark>
the fundamental problem is quite simple - it's too costly to verify, it isn't sufficiently costly not to
<CodeShark>
that's really what it boils down to :)
<CodeShark>
fix those things and as if by magic miners will miraculously stop doing this crap
<CodeShark>
I still think that ultimately wallets are the most important validator nodes
<CodeShark>
and ironically these are the nodes that are least likely to invest in full validation
<CodeShark>
relay nodes are also important - it would be better to err on the side of relaying invalid data than on not relaying valid data...and have the wallet nodes do final validation
<CodeShark>
but...it's a pipedream :p
<gmaxwell>
CodeShark: well there is a counter argument that relaying invalid data increases exposure for those behind you.
<CodeShark>
gmaxwell: true...but we probably shouldn't be relying on that :)
<zooko>
gmaxwell: +1 on 'down with "should"'
<gmaxwell>
sure sure, but its an argument against making it worse. The other is that its easy to open up DOS vectors that way.
zooko has quit [Quit: goodnight folks]
Mably has joined #bitcoin-wizards
<CodeShark>
gmaxwell: ideally, relay nodes would also validate...and validate correctly. but wallets not validating correctly opens up even more attack vectors
<CodeShark>
and from an incentives perspective, the wallet node operators stand to lose a lot more from improper validation
<bramc>
Technically validating just creates a latency problem. You can accept new blocks immediately and start mining them, then invalidate in the background. But that requires some actual engineering
<bramc>
Like, an engineer might have to spend a few days or maybe even a few weeks getting it to work right.
<gmaxwell>
yea, a perfectly reasonable thing to do would be to start early but not relay until you've caught up the validation; but if you're getting it wrong the failure is silent.
<CodeShark>
it's unenforceable, though
<CodeShark>
the only way to enforce it is economically
<bramc>
gmaxwell, Or if you want to be a jerk about it, relay immediately but validate in the background and throw out the bad one in favor of a good one if it gets invalidated
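bramc's relay-then-validate idea can be sketched as a simple control flow (all names illustrative; a production version would validate asynchronously and also unwind anything mined on top of an evicted block):

```python
# Sketch of the relay-then-validate pattern bramc describes: propagate
# a new block immediately on cheap checks, run full validation
# afterwards, and evict the block if validation fails.

def process_block(block, relay, full_validate, evict):
    relay(block)                  # optimistic: relay before full checks
    if not full_validate(block):  # expensive validation, done second
        evict(block)
        return False
    return True
```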
<CodeShark>
many miners probably would run something like that if it didn't eat into profits...but it's unenforceable...and I think it's safe to say that most miners will not do this customization correctly :p
<CodeShark>
so it would have to come prepackaged
DougieBot5000 has quit [Quit: Leaving]
<CodeShark>
did you finish compiling the double-spend list, gmaxwell?
<gmaxwell>
CodeShark: no, turned out I'd broken my txindex nodes observing the fireworks the other day, reindexing now. :(
<CodeShark>
derp :p
<gmaxwell>
I can share the list with someone else if they have a txindex handy?
arubi_ has joined #bitcoin-wizards
cornusammonis has joined #bitcoin-wizards
<CodeShark>
I used to always run a synched database with every single possible index you might find useful...but I stopped doing that a long time ago
<CodeShark>
I even indexed tx inputs that were in the same set of transactions
<CodeShark>
I'm considering revisiting that project...but I need a backend that is more efficient with insertions
cornus_ammonis has quit [Ping timeout: 256 seconds]
CoinMuncher has joined #bitcoin-wizards
<CodeShark>
it's really nice to be able to do queries like "grab me all the dependencies back n generations from this transaction"
<CodeShark>
or "find whether input X connects to output Y via some chain"
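Both of those queries are plain graph traversals over an input index. A toy sketch, where the `spends` map (txid to the txids its inputs spend from) is a hypothetical stand-in for a real database index:

```python
from collections import deque

def ancestors(spends, txid, n):
    """All dependencies back n generations from txid."""
    found, frontier = set(), {txid}
    for _ in range(n):
        frontier = {p for t in frontier for p in spends.get(t, ())} - found
        found |= frontier
    return found

def connects(spends, start, target):
    """Whether start reaches target via some chain of inputs."""
    seen, queue = set(), deque([start])
    while queue:
        t = queue.popleft()
        if t == target:
            return True
        if t not in seen:
            seen.add(t)
            queue.extend(spends.get(t, ()))
    return False
```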
<CodeShark>
lol
sy5error has quit [Remote host closed the connection]
shen_noe has joined #bitcoin-wizards
<CodeShark>
it would be nice to store even invalid stuff for analysis
Mably has quit [Ping timeout: 256 seconds]
p15x has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
orperelman has joined #bitcoin-wizards
kmels has quit [Read error: Connection reset by peer]
<bramc>
CodeShark It's not unreasonable to say that relay-then-validate is best practice. It results in optimized profits for everyone doing it, and maximizes fucking over of miners who haven't gotten with the program.
<bramc>
I'm not joking
nullbyte has quit [Ping timeout: 248 seconds]
<gmaxwell>
bramc: well the screwing over is already optimized by matt's relay network-- as it actually does SPV-only validation before relaying a block.
<gmaxwell>
as its users are all parties we thought were (mostly) relaying for themselves.
<bramc>
gmaxwell, Using the relay network is too much work, easier to turn off validation
nullbyte has joined #bitcoin-wizards
<gmaxwell>
bramc: actually 'turning off validation' for these guys took a fair amount of software development.
<gmaxwell>
But one thing I've come to learn is that lazy has many forms and often people find using someone else's software or asking some questions about it to require more energy than spending several days/weeks rewriting from scratch.
<bramc>
gmaxwell, That I can actually relate to, although it's gotten a lot better in recent years
<bramc>
For example, years ago I wrote my own version control system. Now I'd merely like to dive deep into git and fix its fucking rebase implementation
<gmaxwell>
yea, sure, sometimes myself too... though recognizing that it's (sometimes at least) a kind of laziness is interesting.
<bramc>
gmaxwell, My team at work is writing their own GUI library from scratch. Woo mobile development. They tried using another one, and kept sending in patches to improve its performance until eventually they decided to just rewrite all of it.
<bramc>
At least these days collections frameworks are fairly dialed in
<bramc>
I'm not holding my breath on not having to do GUI code any more though. Web development works now, but mobile sucks, and in the not too distant future we're going to have to work with augmented reality...
drwin has joined #bitcoin-wizards
www has joined #bitcoin-wizards
<CodeShark>
there's that old article sipa once sent me about how it's harder to read other people's code than write one's own
<CodeShark>
it's an ancient article...but it's still quite relevant
<CodeShark>
might need some updating of the actual product names...but otherwise the narrative still works :)
__FranzKafka__ has quit []
<CodeShark>
there are some things I disagree with, though - like "It's important to remember that when you start from scratch there is absolutely no reason to believe that you are going to do a better job than you did the first time. "
<CodeShark>
I find it to be the case that whenever I rewrite anything I always do a better job the second time
<CodeShark>
but that's not quite the same as rewriting by myself what a bunch of other people wrote
<CodeShark>
rewriting something that I originally wrote myself is entirely different
<CodeShark>
because I bring all the knowledge and experience with me into the rewrite
<CodeShark>
the real trick isn't rewriting everything from scratch - but compartmentalizing the ugliness :)
<CodeShark>
especially relevant to bitcoin :p
<bramc>
CodeShark I fundamentally disagree with the thesis of that essay. In practice people are way too conservative about when to just rewrite. You need a mature senior team to be able to do a rewrite and actually deliver it though.
<CodeShark>
yes, there are many instances where the advice in that essay does not apply at all
nullbyte has quit [Ping timeout: 256 seconds]
<CodeShark>
there's a huge difference between reimplementing someone else's thing and designing something superior
<CodeShark>
if you are able to really design something superior, go for it :)
nullbyte has joined #bitcoin-wizards
<CodeShark>
but the superior stuff should probably be more than just refactoring
<CodeShark>
as in, inventing a new product
<bramc>
Not always. If you need to refactor more than about a third of the codebase it's generally faster and better to rewrite from scratch.
<gmaxwell>
sometimes software seems complex to you because it's solving hard problems and you don't fully understand it or all the issues it must solve; and sometimes it's complex just because it's crufty. It's easy to mistake the former for the latter, and important to avoid that error.
<gmaxwell>
otherwise you can't make progress because you're continually repeating the past mistakes.
<CodeShark>
well put :)
<bramc>
It's remarkable how senior a team has to be to keep a codebase from becoming a disaster over time
<bramc>
merciless refactoring is necessary just to keep things from spiraling out of control, as part of ordinary maintenance
Aquentin has quit [Ping timeout: 252 seconds]
<bramc>
My team had a real discussion once about order in which to do things, and it boils down to (1) code reviews (2) debugging (3) everything else
<CodeShark>
how big is your team, bramc?
c-cex-yuriy has joined #bitcoin-wizards
<bramc>
CodeShark Five people total
<CodeShark>
good size :)
<bramc>
Yeah it really helps to keep things small
<CodeShark>
engineer parallelization definitely has more than log N overhead :p
<CodeShark>
it's more like quadratic overhead
<CodeShark>
what are you guys working on right now?
<bramc>
p2p live video streaming, same thing I've been working on for years
<CodeShark>
ah - I have a video codec issue you might be able to help with :)
<CodeShark>
you've probably thought of this one before...but I need a secure unidirectional optical data transfer mechanism. something like IrDA is inexpensive but nonstandard. hi res color displays and cameras are a hell of a lot more expensive but are standard on all consumer products. QR codes are great for print media but suck for video. You get the picture :)
ThomasV has quit [Ping timeout: 244 seconds]
<CodeShark>
would be nice to have a video codec capable of high throughput
<CodeShark>
but I haven't been able to find anything like that out there
<bramc>
We just use H.264, and generally as pass-through, because reencoding is bad.
<bramc>
Perhaps unsurprisingly, my focus is primarily on application-layer protocols.
shen_noe has quit [Read error: Connection reset by peer]
<CodeShark>
so you guys aren't really into codecs...darn :p
<CodeShark>
gmaxwell, weren't you into codecs?
p15x has joined #bitcoin-wizards
<CodeShark>
bramc: I've run into several apps that attempt to get around the QR 4k limit by using multiple QR codes either displayed together or flipbook style...but no matter how hard they try it always sucks :p
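The flipbook approach amounts to framing: split the payload into sequence-numbered chunks and reassemble tolerant of duplicates and reordering. A toy sketch, where the 4-byte header and the per-frame capacity are made up for illustration:

```python
import math

FRAME_DATA = 1000  # hypothetical per-frame capacity in bytes

def split(payload):
    # Prefix each chunk with a made-up header: 2-byte index, 2-byte total.
    total = math.ceil(len(payload) / FRAME_DATA)
    return [
        i.to_bytes(2, "big") + total.to_bytes(2, "big")
        + payload[i * FRAME_DATA:(i + 1) * FRAME_DATA]
        for i in range(total)
    ]

def reassemble(frames):
    # Tolerates duplicates and out-of-order frames; assumes at least one frame.
    got = {}
    for f in frames:
        idx, total = int.from_bytes(f[:2], "big"), int.from_bytes(f[2:4], "big")
        got[idx] = f[4:]
    if len(got) == total and set(got) == set(range(total)):
        return b"".join(got[i] for i in range(total))
    return None  # still missing frames
```

The hard part in practice is not the framing but the camera-side decode rate, which is presumably why the flipbook apps "always suck".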
<bramc>
It's funny what expertise people disclaim. I claim to not be a cryptographer because I'm not as into fundamental algorithms as gmaxwell and djb. I also claim to not be a security person because I legitimately don't do that stuff because that shit is awful.
sparetire_ has quit [Quit: sparetire_]
<bramc>
Mostly I do stuff using basic crypto primitives on top of IP with maybe a little bit of math. Notably there recently has been the development of Bitcoin, DissidentX, and Riposte, which seems to indicate that a lot can be done with vanilla crypto.
<p15x>
isn't steganography just unworkable at a large scale because a dedicated attacker who can study wide use of it can write a detector for it?
<CodeShark>
one-way functions and cyclic groups with hard-to-invert representations are pretty much the mainstay of all public key crypto
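A toy illustration of that point, using modular exponentiation as the hard-to-invert representation. The parameters here are deliberately tiny; real systems use roughly 256-bit groups or elliptic curves:

```python
P, G = 23, 5  # toy prime modulus and generator

def public_from_private(x):
    # Forward direction is cheap, even for enormous exponents.
    return pow(G, x, P)

def brute_force_log(y):
    # The generic inverse: try every exponent (the discrete log).
    # At real key sizes this loop is infeasible, which is the whole point.
    for x in range(P):
        if pow(G, x, P) == y:
            return x
```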
<CodeShark>
but I'm really interested in learning more about lattice-based crypto
<bramc>
p15x, The point of DissidentX is to make stego much more like regular encryption in that the security is *mostly* in the key. There can still be detectors, but it's possible to iterate on the encoding side without having to rewrite the decoder every single time.
priidu has joined #bitcoin-wizards
<bramc>
CodeShark ECC turns out to be the most confidence-inspiring fundamental primitive (no controversy about that one in this crowd, but it was historically not viewed that way for political reasons)
<p15x>
can someone not write a detector that is flexible enough to spot the general encoding pattern even if there are variations on the design or is it not feasible?
<bramc>
p15x, That's a somewhat involved subject, but it's fair to say that a detector has basically no hope against an encoder who's able to look at the detector and iterate
<p15x>
so it's a back and forth battle with no conclusive winner?
<CodeShark>
the main advantage of EC as representations of cyclic groups is key length, no?
<bramc>
p15x, It's also possible for the encoder to hide data at any rate they choose, including an extremely low one, and mix and match encoding techniques, which likely makes things hopeless for the detector in the extreme cases
<bramc>
CodeShark Key length and generally being more confidence inspiring. RSA has all kinds of icky encoding gotchas.
<CodeShark>
right - RSA requires choosing two primes...and not all prime pairs are equal :)
<bramc>
p15x, Historically it's heavily favored the detecting side. With DissidentX the weight may shift the other way. It will never be 100% conclusive though.
<bramc>
p15x, Unfortunately at this point the mainstream of the stego community doesn't understand what DissidentX does. I think when I tried to talk to them they got the impression that I'm a crank
<bramc>
(not an unreasonable assumption for them, when someone outside your field says they have a result which requires that you rethink your whole worldview they're usually a crank)
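To make the "security mostly in the key" idea concrete, here is a minimal keyed-LSB sketch. This is not DissidentX, just an illustration over a byte cover: the key, not the encoding layout, determines which positions carry payload bits.

```python
import hashlib

def _positions(key, cover_len, nbits):
    # Derive a keyed, deterministic ordering of cover positions.
    ranked = sorted(
        range(cover_len),
        key=lambda i: hashlib.sha256(key + i.to_bytes(4, "big")).digest(),
    )
    return ranked[:nbits]

def embed(cover, key, payload_bits):
    out = bytearray(cover)
    for pos, bit in zip(_positions(key, len(cover), len(payload_bits)), payload_bits):
        out[pos] = (out[pos] & 0xFE) | bit   # overwrite the LSB
    return bytes(out)

def extract(stego, key, nbits):
    return [stego[pos] & 1 for pos in _positions(key, len(stego), nbits)]
```

A detector without the key has to treat every position as potentially payload-bearing, which is what lets the encoder trade rate for detectability.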
<bramc>
CodeShark It's far worse than that. The rules around checking that fucking high order byte can get you every time.
<bramc>
There's a reason why ed25519 is a bijection
<bramc>
Anybody know if secp256k1 is as well?
<CodeShark>
what do you mean it's a bijection?
<CodeShark>
every pubkey maps to a unique privkey and vice versa?
<bramc>
CodeShark Every byte array of the appropriate length is a valid public key. Same for private keys.
<CodeShark>
yes, secp256k1 is also like that - the private key is bounded above by the group order
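Concretely, using the secp256k1 group order from the SEC 2 standard: a private key is valid iff 0 < k < N, so the "every byte array" property very nearly holds, with an invalid fraction of roughly 2^-128.

```python
# secp256k1 group order (SEC 2).
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def is_valid_privkey(b32):
    # A 32-byte string is a valid private key iff 0 < k < N.
    k = int.from_bytes(b32, "big")
    return 0 < k < N

# Invalid values: k == 0, plus every k in [N, 2**256 - 1].
invalid_fraction = (2**256 - N + 1) / 2**256
```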
<bramc>
RSA implementers naturally want to do things like check the high order byte for validity as an optimization. Unfortunately it's fatal.
<bramc>
Best to not even give the implementers that bit of temptation.
<bramc>
ECC also lends itself much better to constant time implementation
<phantomcircuit>
<bramc> It's remarkable how senior a team has to be to keep a codebase from becoming a disaster over time
<phantomcircuit>
lold
<bramc>
And its best algorithms have a lot less unnervingly clever math trickery
<bramc>
CodeShark I'm fading at the moment and you just asked a question which requires swapping more stuff into my brain than I'm prepared to right now.
jtimon has joined #bitcoin-wizards
<CodeShark>
if you don't really care about signing performance, it's not too hard to get it to be constant time - the trick is writing nonbranching code :)
<CodeShark>
you can always do conditional register swaps and carry out "dummy" operations
<bramc>
That's a whole lot of temptation right there.
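The conditional-swap trick mentioned above can be sketched with masking: select between two values with bitwise arithmetic instead of a branch, so the instruction sequence does not depend on the secret bit. Python only illustrates the idea; real constant-time code is written in C or assembly.

```python
MASK64 = (1 << 64) - 1

def ct_swap(bit, a, b):
    # bit must be 0 or 1; mask is all-ones iff bit == 1.
    mask = (-bit) & MASK64
    t = mask & (a ^ b)
    # XOR with t swaps a and b when mask is all-ones, is a no-op otherwise.
    return a ^ t, b ^ t
```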
<gmaxwell>
zooko: I prodded him offlist about his recent posts on PHC.
erasmospunk has joined #bitcoin-wizards
kisspunch has left #bitcoin-wizards [#bitcoin-wizards]
Emcy_ has joined #bitcoin-wizards
Emcy_ has joined #bitcoin-wizards
erasmospunk has quit [Remote host closed the connection]
* zooko
looks at PHC
<gmaxwell>
I feel like PHC has gone to the bad side of sci.crypt in the 90s, lots of arm waving posts--- more opinion than science. :(
Emcy has quit [Ping timeout: 255 seconds]
dc17523be3 has quit [Ping timeout: 264 seconds]
orperelman has quit [Ping timeout: 246 seconds]
dc17523be3 has joined #bitcoin-wizards
<zooko>
I had no idea there was this great discussion of proof-of-RAM algorithms on the PHC list. Thanks.
<narwh4l>
It seems like his argument, for Momentum at least, is that miners will simply ignore the optimal approach
<narwh4l>
For some reason they will not be interested in the fact that the best performance comes from added memory?
akrmn1 has quit [Ping timeout: 244 seconds]
<narwh4l>
Doesn't seem like a completely reasonable assumption to me
<gmaxwell>
'performance' is a mistaken assumption on your part.
akrmn has joined #bitcoin-wizards
<gmaxwell>
participants care about cost. PHC participants make a (IMO not very strongly supported, but earnest) assumption that memory is very expensive.
justanotheruser has quit [Ping timeout: 255 seconds]
<bramc>
I'm puzzled as to what this paper does. You have their very special DAG construction, which results in a merkle root, and you... do what exactly with that root? I'm confused.
antgreen has joined #bitcoin-wizards
justanotheruser has joined #bitcoin-wizards
moa has joined #bitcoin-wizards
XXIII has joined #bitcoin-wizards
<bramc>
It seems like there's a challenge which gets issued, and the form of the challenge determines which paths up to the roots have to be revealed
dEBRUYNE has quit [Ping timeout: 248 seconds]
<bramc>
This seems sort of like the opposite of proofs of sequential work, because with those you're trying to avoid needing space, where in this case you're trying to show space has been used
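The challenge/path-reveal mechanics can be sketched with an ordinary Merkle tree: commit to stored data with a root, then answer a challenge by revealing the selected leaf plus its authentication path. The paper's DAG construction is far more involved; this sketch assumes a power-of-two leaf count.

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, idx):
    # Authentication path: the sibling hash at every level.
    level, path = [h(x) for x in leaves], []
    while len(level) > 1:
        path.append(level[idx ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(root, leaf, idx, path):
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if idx % 2 == 0 else h(sib + node)
        idx //= 2
    return node == root
```

The space-proving intuition is that answering random challenges quickly requires actually keeping the leaves (or the levels) around.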
AaronvanW has quit [Ping timeout: 246 seconds]
AnoAnon has joined #bitcoin-wizards
dc17523be3 has quit [Ping timeout: 240 seconds]
AnoAnon has quit [Max SendQ exceeded]
dc17523be3 has joined #bitcoin-wizards
DougieBot5000 has joined #bitcoin-wizards
bramc has quit [Quit: This computer has gone to sleep]
<amiller>
bramc,
<amiller>
er nvm
jgarzik has joined #bitcoin-wizards
jgarzik has quit [Changing host]
jgarzik has joined #bitcoin-wizards
ruby32 has joined #bitcoin-wizards
<amiller>
i don't understand how to concretely apply this cuckoo parallelization thing