sipa changed the topic of #bitcoin-wizards to: This channel is for discussing theoretical ideas with regard to cryptocurrencies, not about short-term Bitcoin development | http://bitcoin.ninja/ | This channel is logged. | For logs and more information, visit http://bitcoin.ninja
bramc has quit [Ping timeout: 260 seconds]
<amiller> i'd assume pretty much anything is possible once you start with anyone-can-spend
dnaleor has quit [Quit: Leaving]
<kanzure> great
MoALTz has quit [Quit: Leaving]
dnaleor has joined #bitcoin-wizards
dnaleor has quit [Remote host closed the connection]
ratbanebo has joined #bitcoin-wizards
tromp has joined #bitcoin-wizards
ratbanebo has quit [Ping timeout: 264 seconds]
dnaleor has joined #bitcoin-wizards
fibonacci has quit [Quit: Connection closed for inactivity]
oleganza has joined #bitcoin-wizards
vo8co has joined #bitcoin-wizards
kenshi84_ has joined #bitcoin-wizards
kenshi84 has quit [Ping timeout: 256 seconds]
oleganza has quit [Quit: oleganza]
ratbanebo has joined #bitcoin-wizards
bramc has joined #bitcoin-wizards
<bramc> amiller: Fair enough, in principle anyone-can-spend can be soft-forked into the underlying transactions being just an accounting system, with the authentication done by something new
<amiller> bramc, it's pretty tricky still, because the problem is that the *next* transaction has to refer to the previous one by its TXID, which necessarily includes its inputs
ratbanebo has quit [Ping timeout: 240 seconds]
<amiller> so the MULTIINPUT idea is what iddo came up with for getting around that in a soft-fork compatible way
<amiller> i *think* it is soft-fork compatible
<bramc> amiller: But you could have the 'real' signing layer refer to transactions using a different id. That requires potentially two utxo databases, which would be annoying
<amiller> yeah, i thought for a while it was possible to do a softfork for this without having to resort to building a whole new UTXO set and somehow putting all anyone-can-spend coins in there
<bramc> amiller: MULTIINPUT fixes the one case, but it would be nice if inputs just plain didn't go into the txid. That seems to be the way you'd do it starting from scratch
<amiller> actually that is sort of an interesting thing, i don't think the "make an entire anyone-can-spend special account" has been fleshed out as a general concept here
<amiller> sidechains is the only proposal that really does that i think
<bramc> sidechains doesn't include inputs in the txid?
<kanzure> http://elementsproject.org/ is where segwit came from
<gmaxwell> the idea of leaving out the @#$@ signatures from the hashes predates any of that, and I believe many people have independently suggested that it would have been a better design over time.
<gmaxwell> In elements we made one such construction and worked through things like how, for engineering reasons, you still need commitments to the signatures.
<bramc> gmaxwell: What do you mean by commitments to the signatures?
<gmaxwell> the block has to have a hash of the witnesses; or otherwise there is a trivial denial of service attack where you can feed a node malleated blocks with invalidated signatures. (and it's generally useful for accountability to know what signature was actually used, when multiple administratively distinct signatures are possible)
<bramc> Oh of course that's needed
<bramc> But I wasn't 100% sure dropping the inputs is okay, sounds like it is
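A toy version of the shape gmaxwell describes (not the actual Elements or segwit construction; the serialization and names below are made up): the txid tree leaves the signatures out, but the block still commits to a second tree over the witnesses, so a malleated block is cheap to reject.

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    # Toy merkle root: pair hashes level by level, duplicating an odd last node.
    if not leaves:
        return sha256d(b"")
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Illustrative transactions: (non-witness serialization, witness serialization).
txs = [(b"tx1-core", b"tx1-witness"), (b"tx2-core", b"tx2-witness")]

txid_root = merkle_root([sha256d(core) for core, _ in txs])    # signatures left out
witness_root = merkle_root([sha256d(wit) for _, wit in txs])   # committed separately

# The block commits to both roots; swapping in a different (even invalid)
# signature changes witness_root, so a malleated block is rejected cheaply.
block_commitment = sha256d(txid_root + witness_root)
```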
<sipa> dropping the _inputs_, not just the input scripts?
<sipa> elements does not drop the inputs
<sipa> that would enable replay attacks
<bramc> sipa: Salt in the transactions can prevent replay attacks
<gmaxwell> and then require a perpetually growing used salt database.
<bramc> If the salt's long enough choosing it at random should work fine
<gmaxwell> except for the unprunable perpetually growing salt database...
<bramc> Why do you need a salt database? Each transaction creator can make sure their own salt is fine by making it long enough and random
<sipa> if you don't have address reuse, it is possible
Burrito has quit [Quit: Leaving]
<bramc> Without address reuse you don't need the salt at all. It only applies in the case where, for example, someone's accepting donations, and then pays someone else for something, and that other person snarfs an identical-sized donation
<sipa> right
<bramc> And in that case both of the people doing donations can just choose long enough random salt and the problem goes away
<gmaxwell> it's very hard to reliably prevent all possible cases of reuse, due to concurrency and state (e.g. restarts.)
Ylbam has quit [Quit: Connection closed for inactivity]
<bramc> Maybe there's an attack where I send you a payment which is a duplicate of another in exchange for something, then get a payment back from you for something else and reclaim the original payment
<bramc> But then, a duplicate payment can be rejected for already being in the utxo set and won't go through until the old one is spent
<sipa> assuming there is only one per block
eck has joined #bitcoin-wizards
<eck> is this bitcoin lizards?
<bramc> sipa: It would be totally reasonable to check the utxo set and reject transactions which have outputs which are already in there
<sipa> bramc: yes, but the utxo set only has outputs from confirmed transactions
<sipa> you can't prevent multiple concurrent unconfirmed payments to the same address
<sipa> though i guess you could introduce the concept of 'double pay' to the mempool as well, mimicking double spend prevention
<bramc> sipa: right but that attack only matters when multiple senders are intentionally using duplicate salt
<bramc> and yes it should be backed up in the mempool as well
<bramc> There could be a compression trick where 'unsalted' transactions make their salt by hashing the inputs, and the size of those bytes doesn't count toward the block size
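A sketch of the salting idea discussed above, under a hypothetical output-identification scheme (not Bitcoin's actual rules) in which inputs are not part of the id: an explicit random salt, or for 'unsalted' transactions a salt derived by hashing the inputs, keeps two otherwise identical payments from colliding.

```python
import hashlib
import os

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def make_salt(inputs=None) -> bytes:
    if inputs is None:
        return os.urandom(16)                # explicit random salt
    return h(b"".join(inputs))[:16]          # "unsalted" tx: salt derived from inputs

def output_id(value: int, script: bytes, salt: bytes) -> bytes:
    # Hypothetical output identifier for a world where inputs are not part of
    # the id: the salt is the only thing distinguishing repeated payments.
    return h(value.to_bytes(8, "little") + script + salt)

script = b"donation-address-script"
a = output_id(100_000, script, make_salt())
b = output_id(100_000, script, make_salt())
assert a != b   # two identical-looking donations no longer collide / replay
```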
jjj has joined #bitcoin-wizards
<sipa> bramc: btw, as the discussion stopped afterwards, do you understand how TXO commitments can give you proof of unspentness?
Noldorin has quit [Quit: Textual IRC Client: www.textualapp.com]
<bramc> sipa: Uh, no, they only seem useful for proofs of spentness
<bramc> having STXO commitments can work, but only if the STXO set format is one which allows for proofs of non-inclusion, which append-only formats don't
tromp has quit [Remote host closed the connection]
<sipa> bramc: TXO commitments (as proposed by peter todd) are append only, but still mutable
skeuomorf has quit [Ping timeout: 240 seconds]
<sipa> so the spent entries are overwritten
<sipa> but there just is no rebalancing
<sipa> as there is no rebalancing, wallets can easily keep track of where their own outputs in that tree are
<bramc> Oh that, yeah, that seems to combine the worst of everything
<sipa> while full nodes can completely forget it (i.e., they have no database or UTXO set at all anymore)
<sipa> wallets provide a proof that their inputs are in the tree, with enough information for full nodes to recompute the root after adding outputs and overwriting inputs
<bramc> Not sure how that all works. Wallets wanting to prove unspentness of their own things still have to get updates to new roots
<sipa> sure, they need to see the blocks
<sipa> they can also outsource it
<sipa> and full nodes can keep a partial utxo tree for recently created outputs, so the proofs are only needed for spending old coins
<bramc> They can also outsource updating their unspent proofs in a much simpler format
<bramc> And full nodes can remember what the recent updates were in a simpler format as well
<sipa> i really like the idea of making the UTXO set size something that doesn't impact full node scalability anymore
<sipa> of course, it comes with very different costs elsewhere
<bramc> A simpler format gets all the same advantages
<sipa> what do you mean by simpler format?
<bramc> I mean having a simple utxo root like I proposed
<sipa> oh, there are certainly many alternatives to the commitment structure
<sipa> i still haven't read your proposal, and i plan to
<sipa> but this idea of moving the (U)TXO set out of full nodes changes the priorities
<bramc> My proposal is a really simple patricia trie with one performance trick thrown into the semantic definition and a bunch of tricks in the non-reference implementation to improve cache coherence
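bramc's actual implementation isn't shown in the log; purely as a sketch of the general shape of such a commitment (a plain binary trie rather than a compressed patricia trie, with invented hashing conventions): keys are hashes, each internal node hashes its two children, and the root commits to the whole set.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

EMPTY = h(b"empty")

def bit(key: bytes, i: int) -> int:
    return (key[i // 8] >> (7 - i % 8)) & 1

def trie_root(keys, depth=0):
    # Commitment over a set of 32-byte keys (e.g. hashes of UTXO entries):
    # recurse on key bits and hash the two children together at each node.
    # A real patricia trie would collapse single-child paths; omitted here.
    if not keys:
        return EMPTY
    if len(keys) == 1:
        return h(b"leaf" + keys[0])
    left = [k for k in keys if bit(k, depth) == 0]
    right = [k for k in keys if bit(k, depth) == 1]
    return h(trie_root(left, depth + 1) + trie_root(right, depth + 1))

utxo_hashes = sorted(h(f"utxo-{i}".encode()) for i in range(1000))
root = trie_root(utxo_hashes)
```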
ratbanebo has joined #bitcoin-wizards
<bramc> Fully relying on that is potentially problematic because a wallet which has been asleep for a while still needs to find a truly full node to get its proofs
<bramc> But that issue is orthogonal to any issues of UTXO or TXO format, at least among the ones we're discussing. They all support it.
<sipa> Absolutely.
<sipa> But if for example no (or few) nodes will actually keep the full dataset around anymore, something like memory compactness becomes relatively unimportant.
<sipa> So I just wanted to make sure you understood the idea.
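A toy rendering of the scheme sipa is describing (Peter Todd's TXO commitments, heavily simplified here; hashing and markers are invented): the tree is append-only and never rebalanced, spending overwrites a leaf with a marker, and a node that has forgotten the leaves can still roll the root forward from a wallet-supplied merkle path.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

SPENT = h(b"spent-marker")

def fold_path(leaf: bytes, position: int, path: list) -> bytes:
    # Fold a bottom-up list of sibling hashes into a root.
    node = leaf
    for sibling in path:
        node = h(sibling + node) if position & 1 else h(node + sibling)
        position >>= 1
    return node

def spend_with_proof(current_root, leaf, position, path):
    # A node that keeps only the root: verify the wallet's proof that `leaf`
    # sits unspent at `position`, overwrite it with SPENT, and return the new
    # root computed from the very same sibling path.
    assert fold_path(leaf, position, path) == current_root, "bad proof"
    return fold_path(SPENT, position, path)

# Toy usage: 4 appended outputs, spend the one at position 2.
leaves = [h(f"txo-{i}".encode()) for i in range(4)]
l01, l23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(l01 + l23)
new_root = spend_with_proof(root, leaves[2], 2, [leaves[3], l01])
```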
tromp has joined #bitcoin-wizards
<bramc> Yeah I understand the idea, but having the whole world rely on a relatively small number of nodes to actually keep everything around is alarming to me
<sipa> Nobody needs to keep everything.
<sipa> And the blockchain still exists, you only need to replay it and apply it to your own subset.
jjj is now known as jjj_f3hr
ratbanebo has quit [Ping timeout: 264 seconds]
<bramc> *somebody* needs to keep everything around, because wallets reanimating after years of being asleep and wanting to spend is an important use case
<sipa> well, sure, just replay old blocks
<sipa> dumb storage is easy
<bramc> That might be quite a bit of replaying
<sipa> indeed, but it's much simpler than fully validating history
<bramc> My thought is that it's possible to get away with doing everything the dumb way by using an efficient implementation
<sipa> my opinion is pretty much the opposite
<sipa> i think UTXO lookup/updating is already too slow, and it's only getting worse
<bramc> Any of these formats will allow a simple proof of validity of every block
<sipa> and any commitment structure that needs updating in real time will make it an order of magnitude worse
<bramc> What's involved in it currently?
pro has quit [Quit: Leaving]
<sipa> looking up an entry in the database, deleting it, and adding a new one
<sipa> those are batched for hundreds of blocks, and cached in memory
<sipa> and those are occasionally flushed to disk
<bramc> Not knowing how these databases work it's hard for me to evaluate. This sounds very strange
<sipa> a disk lookup can be multiple milliseconds
<sipa> you're not going to do better than that times the number of inputs
<bramc> Well yes you want the utxo set to easily fit in memory
<sipa> it's several GB already
<sipa> if the whole thing is in memory, it's easy
<sipa> but for many devices, it already can't
<bramc> The inputs can be easily verified if they're hashes
<sipa> so we'd need consensus rules to curb the utxo set growth
<sipa> (which we probably should have anyway... but blah blah political drama)
<bramc> Well *for now* it all fits in memory easily on most full nodes and having nodes do the dumb thing before it's necessary to require the much more complicated thing would give lots of lead time
<sipa> it only fits in memory on dedicated machines, really
<sipa> the representation in memory is several times larger than the one on disk
<sipa> http://bitcoin.sipa.be/utxo_size.png <- i should redo this
<gmaxwell> (as the one on disk is compressed in a multitude of ways.)
<bramc> In the data structure I made the representation in memory is only slightly bigger than the list of hashes concatenated
NewLiberty has quit [Ping timeout: 246 seconds]
<sipa> but we'd pretty much need to replace the whole database with your representation?
<sipa> or can it deal with just having a subset loaded from disk?
<bramc> Not sure what you mean by 'a subset'
<sipa> right now, the full utxo set is only on disk
<bramc> or what exactly is currently stored on disk
<sipa> and there is a cache/write buffer in memory
<sipa> on disk there is a LevelDB database with a txid -> [list of unspent outputs] map
<sipa> i'm working on changing that to a [txid,vout] -> {unspent output} map
<sipa> and there is a cache in memory that keeps a subset of it around
<sipa> if you set the cache size very high, it can become effectively the entire set
<sipa> but we can't assume that you can just load the whole thing in memory
<bramc> I haven't implemented it as a cache
<sipa> most devices that run bitcoin core can't do that
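A rough sketch of the layout sipa describes; the real thing is Bitcoin Core's C++ cache and LevelDB machinery, so everything below is only the illustrative shape, not the actual API.

```python
class UtxoDB:
    """Toy (txid, vout) -> output map with a write-back memory cache."""

    def __init__(self):
        self.disk = {}     # stands in for the LevelDB backing store
        self.cache = {}    # dirty subset held in memory; None marks a deletion

    def get(self, outpoint):
        if outpoint in self.cache:
            return self.cache[outpoint]
        return self.disk.get(outpoint)      # cache miss: the slow disk path

    def apply_block(self, spent_outpoints, new_outputs):
        # Spent entries are deleted and new confirmed outputs added, all in cache.
        for outpoint in spent_outpoints:
            self.cache[outpoint] = None
        self.cache.update(new_outputs)

    def flush(self):
        # Occasional batched write of the dirty cache down to disk.
        for outpoint, output in self.cache.items():
            if output is None:
                self.disk.pop(outpoint, None)
            else:
                self.disk[outpoint] = output
        self.cache.clear()
```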
<bramc> You could easily tune my thing so that the leaves were kept on disk, that would result in a block update being done as basically a single scan across the whole set
<sipa> that's why i like delayed commitments... they can be computed asynchronously, and can use a commitment structure that is independent from the database implementation
<sipa> just read through the whole database once per week, and compute its hash (by merkleizing the items on the fly)
<bramc> Commitments need to be delayed, it's just a question of how delayed
<sipa> if it's delayed enough, you don't need any efficient updating at all; just recompute the whole thing
<bramc> Yes that's the easy way, but there's no harm in using a format which makes incremental updates possible
<sipa> absolutely
<sipa> especially if it's something that is easily constructible from a sorted list of entries
<bramc> I need to implement on-the-fly computation of my format on a sorted list of entries, also need to mention that on the list because it came up
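A sketch of "merkleizing the items on the fly": a single sequential scan over a sorted dump of the set, keeping only one partial hash per tree level, so a delayed commitment needs no incremental-update machinery at all. The pairing rule here is illustrative.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def streaming_merkle_root(entries):
    # One pass over an iterator of serialized entries (e.g. a sequential scan
    # of the database), keeping only one partial hash per tree level.
    stack = []  # list of (level, hash), levels strictly decreasing
    for e in entries:
        node, level = h(e), 0
        while stack and stack[-1][0] == level:
            _, left = stack.pop()
            node, level = h(left + node), level + 1
        stack.append((level, node))
    if not stack:
        return h(b"")
    _, node = stack.pop()
    while stack:             # fold leftovers when the count isn't a power of two
        _, left = stack.pop()
        node = h(left + node)
    return node

root = streaming_merkle_root(f"entry-{i:08d}".encode() for i in range(100_000))
```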
<sipa> but at this point, my biggest uncertainty is whether we want UTXO commitments at all... or at least whether we want mandatory UTXO commitments at all, as they preclude any future change that removes the full utxo set from full nodes
<bramc> Well full nodes need the full utxo set to validate blocks
<sipa> not in a TXO+proofs model
<sipa> which i'll very readily admit is not nearly researched enough to consider for the short term
<bramc> You can also have utxo+proofs and it's a lot simpler. Apples to apples it's all the same data flows
<sipa> ah yes, if blocks are easily applicable to subsets of UTXOs in it
<sipa> agree!
<bramc> Proofs of blocks could be shipped around with blocks easily enough
<sipa> cool
<sipa> i'll read more about this
<sipa> it's been useful to bring it up :)
<bramc> How are nodes validating blocks currently?
<sipa> go through the inputs, find them in the database (with big caching layer in between), then build a batch of updates to apply to the database (which removes inputs and add outputs), iterate through the scriptsigs and validate them, and if they are all good, apply the batch to the cache (and perhaps trigger a flush to disk, if memory is low)
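The same flow, compressed into a sketch (script validation stubbed out, and in-block spend chains and the cache/flush layer ignored):

```python
def connect_block(utxo, block_txs, check_script):
    # `utxo` is a dict keyed by (txid, vout); `block_txs` is a list of
    # (txid, [(outpoint, scriptsig), ...], [output, ...]) tuples.
    batch_del, batch_add = [], {}
    for txid, inputs, outputs in block_txs:
        for outpoint, scriptsig in inputs:
            prevout = utxo.get(outpoint)
            if prevout is None or not check_script(prevout, scriptsig):
                return False                  # invalid block: apply nothing
            batch_del.append(outpoint)
        for vout, output in enumerate(outputs):
            batch_add[(txid, vout)] = output
    for outpoint in batch_del:                # all good: apply the batch
        utxo.pop(outpoint, None)
    utxo.update(batch_add)
    return True

# Minimal usage with a stubbed-out script check:
utxo = {("aa" * 32, 0): {"value": 50, "script": b"anyone"}}
ok = connect_block(
    utxo,
    [("bb" * 32, [(("aa" * 32, 0), b"sig")], [{"value": 49, "script": b"anyone"}])],
    check_script=lambda prevout, scriptsig: True,
)
```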
skeuomorf has joined #bitcoin-wizards
<bramc> Using my non-reference implementation with the leaves on disk should be able to utterly ream that in terms of performance
jjj_f3hr has quit [Quit: Page closed]
<sipa> how so?
<sipa> i don't see how a commiting version would not do any of the steps that are already being done
<bramc> It has a much better idea of exactly where everything is, there's semantic simplicity in knowing that you've just got a hash set
<sipa> LevelDB also knows exactly where everything is...
<bramc> hrm, fair enough, mostly after that it's just more compact
<sipa> why would it be more compact
<bramc> hashes are all the exact same size
<sipa> so? the current implementation has no hashes at all
<sipa> and you still need the full utxos *somewhere*
<sipa> which are not constant size
<bramc> although if leveldb is really doing things right it should be about the same, but with the calculation of hash roots doing no harm
<bramc> Yeah I'm assuming the full utxos are somewhere else
<sipa> well, *all* the current time is spent dealing with the full utxos
<bramc> right the current world is basically proofless, so when there's a reference to a utxo you need to go look up what the pkscript is
<sipa> indeed
<sipa> so in my view (but i'm glad to be proven wrong) any commitment structure is going to be purely additive to what we're already doing
<sipa> and perhaps that extra is negligible or insignificant, but i'm skeptical about that
<bramc> In a better world whenever you get a block it comes with all the earlier pkscripts and a proof of inclusion of its inputs in the current utxo set
<sipa> yup
<bramc> This better world is annoying because it requires no lag on utxo commitments at all
<bramc> So it both enables and requires aggressive commitments
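To make that concrete, a hypothetical wire format (all names invented here) in which each input ships with its old pkScript and an inclusion proof against the previous block's committed UTXO root:

```python
from dataclasses import dataclass
from typing import List
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

@dataclass
class InputWithProof:
    prevout_script: bytes      # the earlier pkScript, shipped alongside the block
    value: int
    position: int              # leaf position in the committed UTXO structure
    merkle_path: List[bytes]   # inclusion proof against the previous block's root

def verify_input(utxo_root: bytes, inp: InputWithProof) -> bool:
    # A node holding only the committed root can check the input without any
    # UTXO database; applying the spends and new outputs then rolls the root
    # forward for the next block.
    node = h(inp.value.to_bytes(8, "little") + inp.prevout_script)
    pos = inp.position
    for sibling in inp.merkle_path:
        node = h(sibling + node) if pos & 1 else h(node + sibling)
        pos >>= 1
    return node == utxo_root
```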
d9b4bef9 has quit [Remote host closed the connection]
legogris has quit [Remote host closed the connection]
legogris has joined #bitcoin-wizards
<bramc> There are different levels of functionality here. One is to allow pruning of old history, which can be accomplished with occasional commitments. Another is to allow proofs of validity of blocks. That second one really sucks
moli has quit [Read error: Connection reset by peer]
<bramc> Basically it can be done, but it requires a completely up to date root in every block, and peter todd's txo commitment structure doesn't change that requirement any
<bramc> So wallets are stuck asking nodes for updates to proofs of their own utxos still being unspent, which are cheap on an ongoing basis but get expensive when the gap is long enough, because they require either a fresh lookup or re-running everything
<bramc> On the plus side, each of those lookups is done by *one* node, and the proof is simply included in the block and verified by everyone else, so yeah it's a big win
Giszmo has quit [Quit: Leaving.]
ratbanebo has joined #bitcoin-wizards
<bramc> And merging together proofs is actually pretty easy - a wallet can give a proof of unspentness from a milestone a while back and the full node can merge in the update since then easily
<bramc> Those milestones don't even have to be baked into the format, they can be conventions of the peer protocol
<sipa> i believe peter todd has work on delayed txo commitments as well that merge proofs from multiple blocks, but i haven't read up on that either
jtimon has quit [Ping timeout: 268 seconds]
<bramc> He's doing all the same stuff in obfuscated form
<sipa> haha
ratbanebo has quit [Ping timeout: 256 seconds]
<bramc> There are some performance optimizations he's doing which might or might not help, and there are simpler ways of doing those same optimizations (simpler than what he's doing, more complicated than what I've built)
<bramc> But all that doesn't change the algorithmic structure: There's a commitment at every block which can be used to prove the very next block
teslax has quit [Ping timeout: 240 seconds]
<bramc> His approach does help a bit if you want servers to be able to calculate that root on a rolling basis without the benefit of the proofs. That doesn't seem particularly useful
<bramc> or rather, you can get that same benefit by making everything trail a bit
<bramc> Maybe it's possible to keep the trailing just a little bit and the constant proofs. Need to think about that a bit more.
<sipa> the insertion ordering is nice for those that need to keep subsets around, as the vast majority of spends are recent
<sipa> but perhaps the idea of having a structure that is not insertion-ordered but can be used for both utxo commitments and a full-nodes-utxo-less world is appealing enough to consider
<bramc> Insertion ordering is a constant factor optimization
<sipa> depends for what, and depends on usage pattern
<bramc> And the same effect can be had with a much more general purpose data structure
<bramc> Basically you make a trie which stores variable length strings and store the insertion numbers
<bramc> That gets you all the performance benefits of insertion ordering without something so bespoke
eck has left #bitcoin-wizards ["WeeChat 1.0.1"]
_whitelogger has joined #bitcoin-wizards
<bramc> The current UTXO set could probably be compacted down into less than half a gig in memory with an insertion ordered data structure. Problem being that it's all insertion numbers and you need to go from txids to insertion numbers
<sipa> eh, no
<bramc> no what?
<sipa> it's 1838178283 bytes, just serializing the entries back to back without any commitment structure
<bramc> How many entries?
<sipa> 46812974, from 18331102 different transactions
<bramc> At the extreme of aggression of compacting the transaction ordered set it's a bit field
<sipa> i'm confused
<sipa> just the txids of 18331102 transactions takes more than 500 MB
<bramc> Yeah that's the problem: The transaction ordered set isn't so useful without going from txid -> entry number. The txids by themselves are quite a bit bigger
<bramc> But the commitment structure could be made much smaller. You could trivially represent it all as a commitment to a bitfield
<bramc> Maybe that isn't a bad idea. Proofs of txid -> position are taken directly out of the block chain
<bramc> Then the math is: for about 50 million txos, adding commitments roughly doubles the size of storage, and the underlying storage is 50 million divided by 8 because it's only one bit per. That's about 12 megabytes.
<bramc> If the size of the utxo set isn't much less than 1/10 the size of the txo set it's going to be really, really hard to beat this data structure
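Spelling the arithmetic out, using the figures quoted earlier in the discussion (order-of-magnitude only):

```python
# Figures quoted above, order-of-magnitude only.
txos = 50_000_000
bitfield_mb = txos / 8 / 1e6              # one bit per txo        -> ~6.25 MB
with_commitments_mb = 2 * bitfield_mb     # "roughly doubles"      -> ~12.5 MB
utxo_serialized_mb = 1_838_178_283 / 1e6  # sipa's serialized UTXO set, ~1.8 GB
print(with_commitments_mb, utxo_serialized_mb / with_commitments_mb)  # ~12.5, ~147x
```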
Guest11920 has joined #bitcoin-wizards
tromp has quit [Remote host closed the connection]
<bramc> Presumably the txo set is much bigger but even 120 megabytes wouldn't be a big deal
ratbanebo has joined #bitcoin-wizards
<sipa> there have been 206M transactions total
<sipa> over 91% have had all their outputs spent
TheSeven has quit [Disconnected by services]
[7] has joined #bitcoin-wizards
<bramc> How many outputs from those 206M transactions?
<sipa> i don't have the number for that
<bramc> One hacky but serviceable thing to do would be to start the set at a particular block and everything before that is positioned by its place in the utxo set at that time, to make it smaller
NewLiberty has joined #bitcoin-wizards
ratbanebo has quit [Ping timeout: 240 seconds]
<bramc> Then presto it's down to 12 megs today, and even if every block is full it's only growing at a rate of 30 megs/year
bramc has quit [Ping timeout: 260 seconds]
tromp has joined #bitcoin-wizards
oleganza has joined #bitcoin-wizards
tromp has quit [Ping timeout: 260 seconds]
oleganza_ has joined #bitcoin-wizards
oleganza has quit [Ping timeout: 240 seconds]
oleganza_ is now known as oleganza
_whitelogger has joined #bitcoin-wizards
afk11 has joined #bitcoin-wizards
skang404 has joined #bitcoin-wizards
_whitelogger has joined #bitcoin-wizards
bildramer1 has joined #bitcoin-wizards
bildramer has quit [Ping timeout: 240 seconds]
ratbanebo has joined #bitcoin-wizards
oleganza has quit [Quit: oleganza]
ratbanebo has quit [Ping timeout: 260 seconds]
dodomojo has quit [Remote host closed the connection]
dodomojo has joined #bitcoin-wizards
afk11 has quit [Remote host closed the connection]
afk11 has joined #bitcoin-wizards
d9b4bef9 has joined #bitcoin-wizards
dodomojo has quit [Remote host closed the connection]
_whitelogger has joined #bitcoin-wizards
skang404 has quit [Remote host closed the connection]
afk11 has quit [Ping timeout: 240 seconds]
afk11 has joined #bitcoin-wizards
pedrovian_ has joined #bitcoin-wizards
pedrovian has quit [Ping timeout: 246 seconds]
ratbanebo has joined #bitcoin-wizards
ratbanebo has quit [Ping timeout: 240 seconds]
stiell has joined #bitcoin-wizards
tromp has joined #bitcoin-wizards
stiell has quit [Ping timeout: 246 seconds]
tromp has quit [Ping timeout: 240 seconds]
stiell has joined #bitcoin-wizards
Davasny has joined #bitcoin-wizards
Davasny is now known as Guest65068
Guest65068 has quit [Remote host closed the connection]
zaleth has joined #bitcoin-wizards
zaleth has left #bitcoin-wizards ["Closing Window"]
ratbanebo has joined #bitcoin-wizards
airbreather_ has joined #bitcoin-wizards
stiell has quit [Ping timeout: 256 seconds]
airbreather has quit [Ping timeout: 268 seconds]
kenshi84_ is now known as kenshi84
tromp has joined #bitcoin-wizards
stiell has joined #bitcoin-wizards
tromp has quit [Ping timeout: 256 seconds]
CubicEarth has quit [Remote host closed the connection]
stiell has quit [Ping timeout: 260 seconds]
stiell has joined #bitcoin-wizards
Pachurter has quit [Ping timeout: 260 seconds]
stiell has quit [Ping timeout: 256 seconds]
Pachurter has joined #bitcoin-wizards
stiell has joined #bitcoin-wizards
StardustX has joined #bitcoin-wizards
MoALTz has joined #bitcoin-wizards
stiell has quit [Ping timeout: 260 seconds]
Guyver2 has joined #bitcoin-wizards
WitnessProtectio has joined #bitcoin-wizards
StardustX has left #bitcoin-wizards ["Leaving"]
WitnessProtectio has quit [Quit: Leaving]
d9b4bef9 has quit [Remote host closed the connection]
d9b4bef9 has joined #bitcoin-wizards
arubi has quit [Remote host closed the connection]
arubi has joined #bitcoin-wizards
skeuomorf has quit [Ping timeout: 240 seconds]
dodomojo has joined #bitcoin-wizards
dodomojo has quit [Ping timeout: 246 seconds]
shesek has quit [Ping timeout: 246 seconds]
bildramer1 is now known as bildramer
Ylbam has joined #bitcoin-wizards
shesek has joined #bitcoin-wizards
_whitelogger has joined #bitcoin-wizards
moli_ has joined #bitcoin-wizards
AaronvanW has quit [Remote host closed the connection]
AaronvanW has joined #bitcoin-wizards
AaronvanW has quit [Ping timeout: 240 seconds]
afk11 has quit [Remote host closed the connection]
afk11 has joined #bitcoin-wizards
AaronvanW has joined #bitcoin-wizards
AaronvanW has joined #bitcoin-wizards
AaronvanW has quit [Changing host]
AaronvanW has quit [Remote host closed the connection]
AaronvanW has joined #bitcoin-wizards
<petertodd> bramc: your claim that TXO commitments requires an up-to-date commitment in each block is mistaken; they do not require a consensus protocol change at all: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013591.html - I also gave a talk on this at mit earlier this month: https://twitter.com/petertoddbtc/status/838517271668604929
<petertodd> bramc: secondly, your claim that TXO commitments requires a txid->index mapping to be kept is also mistaken, as it's only needed for interoperability between non-txo-commitment supporting nodes and upgraded nodes; it can be dropped if all your peers support txo commitments
AaronvanW has quit [Ping timeout: 260 seconds]
<petertodd> bramc: (you may want a mapping for reorgs of recent blocks, but that mapping only needs to store a small set of recently created txids)
airbreather_ is now known as airbreather
pro has joined #bitcoin-wizards
tromp has joined #bitcoin-wizards
Pachurter has quit []
dodomojo has joined #bitcoin-wizards
dodomojo has quit [Remote host closed the connection]
CubicEarth has joined #bitcoin-wizards
CubicEarth has quit [Remote host closed the connection]
CubicEarth has joined #bitcoin-wizards
skang404 has joined #bitcoin-wizards
hashtagg has joined #bitcoin-wizards
aalex has joined #bitcoin-wizards
hashtag_ has quit [Ping timeout: 240 seconds]
CubicEarth has quit [Remote host closed the connection]
arubi_ has joined #bitcoin-wizards
arubi has quit [Remote host closed the connection]
CubicEarth has joined #bitcoin-wizards
bliljerk101 has joined #bitcoin-wizards
aalex has quit [Ping timeout: 260 seconds]
skang404 has quit [Remote host closed the connection]
tromp has quit [Remote host closed the connection]
[7] has quit [Ping timeout: 260 seconds]
TheSeven has joined #bitcoin-wizards
Giszmo has joined #bitcoin-wizards
d9b4bef9 has quit [Remote host closed the connection]
d9b4bef9 has joined #bitcoin-wizards
bliljerk101 has quit []
s4z has joined #bitcoin-wizards
hashtag_ has joined #bitcoin-wizards
hashtagg has quit [Ping timeout: 240 seconds]
arubi_ is now known as arubi
s4z has quit [Remote host closed the connection]
Guest11920 has quit [Quit: EliteBNC free bnc service - http://elitebnc.org - be a part of the Elite!]
Guest39699 has joined #bitcoin-wizards
Guyver2 has quit [Quit: :)]
bramc has joined #bitcoin-wizards
<bramc> petertodd: I meant the data needs to be kept up to date, not that it needs to be in the data format. If the data is canonical then every peer can calculate it locally and proofs can be sent in a side channel, which is kind of neat
<bramc> petertodd: As for needing the txid -> position, the importance of it comes in when you receive a transaction which doesn't include position information, so you need to look it up. You can require a proof of position, which totally works, and is handy because the proofs of position basically never change except for extremely recent things which you can have cached
<bramc> petertodd: I said before that the optimization which TXO commitments allow which might really help is compacting down the set. Running the numbers on it, if you turn that up to 11 and make it a bitfield everything works very well and I actually like that proposal
tromp has joined #bitcoin-wizards
Giszmo has quit [Quit: Leaving.]
JHistone has joined #bitcoin-wizards
<bramc> Making the commitments canonical so they don't have to be put in the blockchain itself is kind of neat. It makes there be some hope that an extension can actually be adopted in the current political climate, because there's no blockchain extension needed
tromp has quit [Ping timeout: 240 seconds]
dnaleor has quit [Ping timeout: 240 seconds]
dnaleor has joined #bitcoin-wizards
AaronvanW has joined #bitcoin-wizards
Sosumi has joined #bitcoin-wizards
AaronvanW has quit [Ping timeout: 240 seconds]
AaronvanW has joined #bitcoin-wizards
NewLiberty has quit [Ping timeout: 246 seconds]
AaronvanW has quit [Ping timeout: 258 seconds]
<bramc> I'm actually excited about having a txo bitfield. It seems to really work
<sipa> what does the bitfield represent?
<bramc> sipa the bitfield is just a bitfield of which txos are still in the utxo set: 1 for included, 0 for spent
<bramc> indexed by position, obviously
<sipa> that's an ever growing structure
<sipa> presumably the ratio of txo vs utxo will increase over time
<bramc> Yes it's a txo set as peter todd proposed. You can compactify down the older parts as they get overrun with zeros and hence can be compressed to be smaller
<sipa> i see
<bramc> And the constant factors are so favorable it's ridiculous
NewLiberty has joined #bitcoin-wizards
<bramc> Even without that compactification it's entirely practical. And it has the benefit that the proofs of position don't change after extremely short term reorgs so wallets can remember them forever. And it dramatically cuts down on database writes because the only real alterations are to spent status which are mostly done in memory
<bramc> Funny how a constant factor of 256 actually matters
tromp has joined #bitcoin-wizards
<sipa> hahaha
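A sketch of the bitfield-plus-compaction idea, with a made-up chunk size: bit i stays 1 while txo number i is unspent, old zero-heavy chunks compress well, and the per-chunk digests stand in for the commitment material. Everything here is illustrative, not a concrete proposal from the discussion.

```python
import hashlib
import zlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

CHUNK_BITS = 1 << 20    # made-up chunk size: 1M txos -> 128 KB of raw bits

class TxoBitfield:
    """Toy TXO bitfield: bit i is 1 while txo number i is unspent, 0 once spent."""

    def __init__(self):
        self.bits = bytearray()
        self.count = 0

    def append_output(self) -> int:
        i = self.count
        if i % 8 == 0:
            self.bits.append(0)
        self.bits[i // 8] |= 1 << (i % 8)
        self.count += 1
        return i                              # the txo's permanent position

    def spend(self, i: int):
        self.bits[i // 8] &= ~(1 << (i % 8))  # an in-memory single-bit write

    def chunk_digests(self):
        # Per-chunk hashes are the commitment material; old, zero-heavy chunks
        # also compress very well for storage (zlib as a stand-in).
        step = CHUNK_BITS // 8
        for off in range(0, len(self.bits), step):
            chunk = bytes(self.bits[off:off + step])
            yield h(chunk), len(zlib.compress(chunk))

bf = TxoBitfield()
positions = [bf.append_output() for _ in range(100_000)]
for p in positions[:90_000]:                  # mostly-spent history compacts well
    bf.spend(p)
digests = list(bf.chunk_digests())
```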
tromp has quit [Ping timeout: 256 seconds]
onabreak has quit [Ping timeout: 260 seconds]
Belkaar has quit [Ping timeout: 240 seconds]
CubicEarth has quit [Remote host closed the connection]
Belkaar has joined #bitcoin-wizards
Belkaar has joined #bitcoin-wizards
Belkaar has quit [Changing host]
AaronvanW has joined #bitcoin-wizards
Sosumi has quit [Quit: Bye]
AaronvanW has quit [Ping timeout: 256 seconds]
tromp has joined #bitcoin-wizards
tromp has quit [Ping timeout: 246 seconds]
ratbanebo has quit [Remote host closed the connection]
ratbanebo has joined #bitcoin-wizards
ratbanebo has quit [Ping timeout: 246 seconds]
AaronvanW has joined #bitcoin-wizards
AaronvanW has quit [Ping timeout: 260 seconds]
CubicEarth has joined #bitcoin-wizards
AaronvanW has joined #bitcoin-wizards
AaronvanW has quit [Ping timeout: 240 seconds]
ratbanebo has joined #bitcoin-wizards
ratbanebo has quit [Ping timeout: 240 seconds]
CubicEarth has quit [Remote host closed the connection]
AaronvanW has joined #bitcoin-wizards
AaronvanW has quit [Ping timeout: 240 seconds]
dnaleor has quit [Quit: Leaving]
MoALTz has quit [Quit: Leaving]
bramc has quit [Ping timeout: 260 seconds]
dnaleor has joined #bitcoin-wizards
dodomojo has joined #bitcoin-wizards
dodomojo has quit [Read error: Connection reset by peer]
dodomojo has joined #bitcoin-wizards
ratbanebo has joined #bitcoin-wizards
ratbanebo has quit [Ping timeout: 264 seconds]
bildramer has quit [Ping timeout: 246 seconds]
tromp has joined #bitcoin-wizards
bildramer has joined #bitcoin-wizards
tromp has quit [Ping timeout: 264 seconds]
wasi has quit [Remote host closed the connection]
wasi has joined #bitcoin-wizards
skeuomorf has joined #bitcoin-wizards
CubicEarth has joined #bitcoin-wizards
dnaleor has quit [Quit: Leaving]
ratbanebo has joined #bitcoin-wizards
skeuomorf has quit [Ping timeout: 258 seconds]
ratbanebo has quit [Ping timeout: 264 seconds]
dnaleor has joined #bitcoin-wizards
JHistone has quit [Ping timeout: 258 seconds]
AaronvanW has joined #bitcoin-wizards
AaronvanW has joined #bitcoin-wizards
AaronvanW has quit [Changing host]
JHistone has joined #bitcoin-wizards
marcoagner has joined #bitcoin-wizards