<tperson>
Sure, I can probably finish the blob store and get it to a point where it can be used.
<jbenet>
tperson: cool, what's needed on it beyond that? just putting it together?
<jbenet>
cc bengl
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<tperson>
Ya, I believe so. I don't know the complete idea behind putting it all together. I've looked at reginabox a bit, but not sure how the blob store fits into it. I'd assume that reginabox was picked because it uses a blob-store already?
<jbenet>
tperson: yeah bengl wrote reginabox and he fixed it up to use any blob-store
<jbenet>
so we should be able to just plug in the new thing.
<jbenet>
IIRC, the ipfs-blob-store was working a bit like patch? with one big root object that points to all top level keys?
<tperson>
oh excellent
<tperson>
correct
<tperson>
I have the dag-store which works like that
<jbenet>
tperson: nice, that sounds good. is the root object anything special or a big list?
<jbenet>
big list may be ok for n < 10,000
<jbenet>
(and for nothing where memory is a big concern)
<jbenet>
another option is to use a leveldb to map key -> ipfs root hash, but it's cheating a little bit.
<jbenet>
clever: benchmark with >1M objects, with both sequential and random read/write workloads. if sqlite comes remotely close to matching leveldb, i'll be impressed.
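As a reference for that leveldb idea, a minimal sketch (assuming the goleveldb package; the key name and hash value are only placeholders) of a key -> ipfs root hash mapping:

```go
package main

import (
	"fmt"
	"log"

	"github.com/syndtr/goleveldb/leveldb"
)

func main() {
	// Open (or create) a small leveldb used only to map
	// top-level keys to ipfs root hashes.
	db, err := leveldb.OpenFile("roots.ldb", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// key -> root hash (the hash here is just an example value).
	err = db.Put([]byte("my-package"),
		[]byte("QmQwR1UZTxDz2T8qdZBCC5qTvLZnnPrWp2LvtdkKUPhsyh"), nil)
	if err != nil {
		log.Fatal(err)
	}

	// Look the root hash back up by key.
	root, err := db.Get([]byte("my-package"), nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("my-package -> %s\n", root)
}
```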
<kyledrake>
I'm not really recommending it, but haystack uses a single append-only volume file and then stores metadata in a separate index. Gets rid of the filesystem metadata. The consequence is that you need to "vacuum" the file to restore space. I wrote a version of this with 10 lines of ruby and it ran faster than my SSD on a single process.
<kyledrake>
rather, it bottlenecked the SSD.
<Tv`>
^ what i've been talking about as "arena storage"
<Tv`>
i don't think ipfs has enough of a need for it, currently
<clever>
a table with 600 rows and an index that happens to cover the group by field
<kyledrake>
The index I guess would map a multihash to the seek position and length in the volume. The b-tree would only need to read a small prefix of each multihash, so it could stay in memory even for a lot of data.
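Roughly, the haystack-style layout described above could look like the following sketch: one append-only volume file plus an in-memory index from multihash to (offset, length). This is only an illustration of the idea, not how haystack or ipfs actually implement it, and all names are made up:

```go
package arena

import (
	"io"
	"os"
	"sync"
)

// entry records where a blob lives inside the single volume file.
type entry struct {
	offset int64
	length int64
}

// Store appends blobs to one volume file and keeps an index from
// multihash (used as a string key) to the blob's offset and length.
type Store struct {
	mu     sync.Mutex
	volume *os.File
	index  map[string]entry
}

func Open(path string) (*Store, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR|os.O_APPEND, 0644)
	if err != nil {
		return nil, err
	}
	return &Store{volume: f, index: make(map[string]entry)}, nil
}

// Put appends the blob to the volume and records its position.
func (s *Store) Put(mhash string, data []byte) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	off, err := s.volume.Seek(0, io.SeekEnd)
	if err != nil {
		return err
	}
	if _, err := s.volume.Write(data); err != nil {
		return err
	}
	s.index[mhash] = entry{offset: off, length: int64(len(data))}
	return nil
}

// Get reads the blob back using the recorded offset and length.
// Deleted blobs would leave holes, which is why the volume needs
// an occasional "vacuum" pass to reclaim space.
func (s *Store) Get(mhash string) ([]byte, error) {
	s.mu.Lock()
	e, ok := s.index[mhash]
	s.mu.Unlock()
	if !ok {
		return nil, os.ErrNotExist
	}
	buf := make([]byte, e.length)
	if _, err := s.volume.ReadAt(buf, e.offset); err != nil {
		return nil, err
	}
	return buf, nil
}
```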
<clever>
compiling the query took 3 disk reads, 2 headers and a single page
<clever>
and then running the query took a single page read (plus a header for locking reasons)
<clever>
in this case, the entire index fit in a single page, so it's not a good example
<jbenet>
Tv` whyrusleeping was saying that he's hitting bottlenecks with current flatfs and wanted to move towards arena.
<clever>
let me check the other table
<Tv`>
jbenet: that sounds like good news!
<jbenet>
we could look into arena storage at some point in near future. let's probably land all the S3 stuff first.
<Tv`>
that means other parts have improved
<clever>
jbenet: 'select count(*) from logs' found ~30k rows, and oddly had to read 3mb! (118 read calls)
<clever>
but thats not a typical use case either
queue has joined #ipfs
queue has quit [Client Quit]
<clever>
jbenet: found something that should serve as a good sample, the git repo for linux
<clever>
extract every git object, and store em in sqlite
<clever>
remote: Counting objects: 4399663, done.
queue has joined #ipfs
Wallacoloo has left #ipfs [#ipfs]
queue is now known as qqueue
<rschulman_>
jbenet: That did it, thanks. :)
www has joined #ipfs
<whyrusleeping>
jbenet: Tv`: the bottleneck of flatfs i'm running into is lots of small writes; the cost of the multitude of syscalls is dwarfing the cost of the writes themselves
<Tv`>
whyrusleeping: linux?
<clever>
whyrusleeping: are they in bulk or spread out over time?
<Tv`>
whyrusleeping: i'd expect the cost is all the fsyncing it does to be safe; linux syscalls themselves are pretty darn fast
<whyrusleeping>
Tv`: yeah, its the cost of the fsync i'm pretty sure
reit has quit [Ping timeout: 246 seconds]
<whyrusleeping>
clever: theyre close together
<Tv`>
whyrusleeping: btw your batch work is the one thing that allows working around that
<clever>
whyrusleeping: not sure how it would affect other parts of your program, but if you start a transaction in sqlite, all of your inserts/updates go into ram
<whyrusleeping>
yeah, i know
<clever>
and are written out in large chunks to save io and syscalls
<Tv`>
even arena storage, if asked to fsync every object, can save only the file creation part; batching multiple updates into one fsync is what gets the gains
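To illustrate the batching point (this is not flatfs's actual API, just a sketch with made-up names): accumulate the small writes in memory, then pay for one write and one fsync at commit time.

```go
package batch

import "os"

// Batch accumulates small writes in memory and commits them with a
// single write and a single fsync, instead of one syscall pair per block.
type Batch struct {
	f   *os.File
	buf []byte
}

func New(path string) (*Batch, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)
	if err != nil {
		return nil, err
	}
	return &Batch{f: f}, nil
}

// Put only appends to the in-memory buffer; nothing hits the disk yet.
func (b *Batch) Put(block []byte) {
	b.buf = append(b.buf, block...)
}

// Commit writes the whole buffer in one call and fsyncs once, so the
// batch becomes durable together -- or, on a crash, is lost together.
func (b *Batch) Commit() error {
	if _, err := b.f.Write(b.buf); err != nil {
		return err
	}
	b.buf = b.buf[:0]
	return b.f.Sync()
}
```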
<whyrusleeping>
although in certain situations its difficult to 'batch' the writes together at the application level
<clever>
yeah, i can see how that would be an issue
<clever>
in mysql/innodb, there is an option to never sync after a query, but instead do 1 sync per second
<Tv`>
aka "i didn't really like that data"
<clever>
that lets the batching happen automatically, but you may lose up to a second
<clever>
and it may land in the middle of an operation
<clever>
cross-table stuff becomes less atomic
tilgovi has joined #ipfs
<tperson>
No idea, I was just searching for the ipfs-api on npm and ran across this atm-ipfs-api package.
<tperson>
Which appears to be a support lib for the `desktop` project.
<qqueue>
hey ipfriends, is there a way to increase the timeout on `ipfs ls`, or whatever makes it abort with `Error: context deadline exceeded`?
<qqueue>
Or more to the point I suppose, I am trying to run `ipfs ls QmQwR1UZTxDz2T8qdZBCC5qTvLZnnPrWp2LvtdkKUPhsyh` from one server and I know that hash exists on another server, yet it times out. How do I debug that?
rschulman_ has quit [Ping timeout: 256 seconds]
<jbenet>
qqueue we should have a way of adding a timeout. i think there's an issue somewhere, if not file one?
<jbenet>
something like --timeout=<duration>
<jbenet>
on all commands
<jbenet>
whyrusleeping didnt we have something like that? could set it as a context on the request.
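Something along these lines would do it in Go: a hypothetical --timeout flag whose value becomes the deadline on the request context (the flag name and the command handler here are assumptions, not the real go-ipfs wiring).

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"time"
)

func main() {
	// Hypothetical --timeout flag; each command would derive its
	// request context from it.
	timeout := flag.Duration("timeout", 60*time.Second,
		"abort the command after this duration")
	flag.Parse()

	ctx, cancel := context.WithTimeout(context.Background(), *timeout)
	defer cancel()

	if err := runLs(ctx, "QmQwR1UZTxDz2T8qdZBCC5qTvLZnnPrWp2LvtdkKUPhsyh"); err != nil {
		fmt.Println("error:", err)
	}
}

// runLs stands in for a real command handler; it just waits so the
// timeout behaviour is visible.
func runLs(ctx context.Context, hash string) error {
	select {
	case <-time.After(5 * time.Second): // pretend the listing finished
		fmt.Println("listed", hash)
		return nil
	case <-ctx.Done():
		// This is where "context deadline exceeded" comes from.
		return ctx.Err()
	}
}
```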
<whyrusleeping>
ive wanted that for so long
<qqueue>
A cursory search through github doesn't find any relevant issues, I'll make one
<whyrusleeping>
jbenet: working on gx, gonna make it a little nicer to use with symlinks
therealplato has joined #ipfs
therealplato has quit [Changing host]
therealplato has joined #ipfs
void has joined #ipfs
<qqueue>
hmm, what's the expectation on being able to get objects from an arbitrary node, after the initial add? Is DHT announce synchronous? I'm just doing adds from my local machine and trying to get them from a vps.
<jbenet>
qqueue: it depends on whether the nat traversal is working: try getting them from a gateway
<jbenet>
(whyrusleeping: i think we may need to make some bitswap agents)
<whyrusleeping>
jbenet: bitswapagents??
<whyrusleeping>
:D
<qqueue>
jbenet: ok, gateway.ipfs.io picked up my object, but it's still timing out on my vps, which I would expect to be easier on nat traversal. Any relevant diagnostics I can pull for that?
<jbenet>
qqueue whats the uptime of your daemon? (we've noticed some weird states sometimes)
mdem has quit [Quit: Connection closed for inactivity]
void has quit [Quit: ChatZilla 0.9.91.1 [Firefox 39.0/20150629114848]]
<qqueue>
Okay, so when I try to `ipfs ls QmQwR1UZTxDz2T8qdZBCC5qTvLZnnPrWp2LvtdkKUPhsyh` (which, admittedly, is a stress test at 50k files), bsdash shows a little blip of the hash in "active requests", but then it immediately goes away (and ls times out 60 seconds later)
<qqueue>
huh, I also have a 100-file test folder (`for i in $(seq 0 100); do echo $i > file$i; done`), and it has the same behavior QmTphasniAEXGimPnvy7av1Xk8akQuRbRwPCdKF8pdg3BT
reit has joined #ipfs
<qqueue>
I _can_ do `ipfs get QmTphasniAEXGimPnvy7av1Xk8akQuRbRwPCdKF8pdg3BT`, which downloads as expected. And, now `ipfs ls` also works as expected. Possibly some difference of behavior in ls that performs badly on large directories?
reit has quit [Ping timeout: 244 seconds]
<jbenet>
qqueue: ok so `ipfs ls` actually needs to get the files too, not just the dir, to show metadata.
<jbenet>
qqueue: we've discussed lifting metadata to the dir, and that makes some sense for unix dirs, but gets harder to do when you have non-unix things.
<qqueue>
ok. I've been trying to get my 50k directory as well, but it keeps stopping at around 1.88 MB. That explains why ls times out at least
reit has joined #ipfs
<jbenet>
qqueue `ipfs get <root>` just times out?
<jbenet>
qqueue try adding it again? (im curious if it's a dht providing issue)
<qqueue>
yeah, `ipfs get QmQwR1UZTxDz2T8qdZBCC5qTvLZnnPrWp2LvtdkKUPhsyh`. I'll try re-adding it
shea256 has joined #ipfs
<qqueue>
readded, but it still freezes at 1.411 mb on my vps. for sanity, I did a get on the local machine, and it's 48.84MB in total
shea256 has quit [Ping timeout: 264 seconds]
<ogd>
are you sure you arent accidentally saving to your floppy drive
<ogd>
they have a 1.4 mb capacity :)
<jbenet>
ogd: you troll
<jbenet>
qqueue that's pretty odd, can you show me your: ipfs swarm peers
<qqueue>
oh, my daemon died somehow, that explains why it stopped at 1.4mb...
<jbenet>
btw qqueue you said the dir had 50k files?
<spikebike>
Hmm, got a new 4k TV to replace a conference projector. Seems like IPFS might be a nice way to let 10 random laptops publish docs that show up on the system connected to the TV
<qqueue>
jbenet: ok, that's kind of what I suspected, hence the `for i in` test files.
<spikebike>
jbenet: fortunately the 4k tv is hooked up to a linux box. I was going to use chromecast streaming but that's only 720p
<qqueue>
Is there an existing bug I can report to? I do have around ~50k actual files I'd like to serve at some point.
<jbenet>
qqueue thanks for this test case
<jbenet>
whyrusleeping we should make this a test bed test o/
<Tv`>
TestAllKeysRespectsContext is a funky, funky test
<Tv`>
i'm really not at all sure why there's a blockstore on top of a datastore
<alu>
I just tried void's software btw
<alu>
It's fucking SICK
<alu>
you can export from blender into a VR room that can automatically load up for you
<alu>
its like he added a multiplayer functionality for blender
<alu>
for collaborative editing
<alu>
He turned blender into the first collaborative online 3d modeling program
<alu>
with IPFS and JanusVR
<alu>
This fucking changes the workflow completely
<alu>
Its executed so very well too!
<alu>
works on windows now too
<jbenet>
alu :)
<jbenet>
alu: that's great!
<alu>
Very cool stuff.
<alu>
working on a demo for NASA now
<jbenet>
Tv` blockstores deal with blocks -- it's a specific data structure with specific guarantees (hashes to the key). and there will be different implementations of them that do different things with that, for example, we're replacing the "blockservice" thing and making it a blockservice that uses bitswap. and we're making a "blockstore" that only stores to the
<jbenet>
local fs. -- it is definitely _similar_ to a datastore, but much more complicated because they may deal with the network, with our assumptions about Blocks, and with other IPFS specific stuff. (datastores are not ipfs specific at all)
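In other words, a blockstore layers the block invariant -- the key is always the hash of the value -- on top of a plain key/value datastore. A simplified sketch of that relationship (the interfaces are stripped down, not the real go-ipfs ones, and sha256 stands in for the multihash machinery):

```go
package blocks

import (
	"bytes"
	"crypto/sha256"
	"errors"
)

// Datastore is a generic key/value store; it knows nothing about ipfs.
type Datastore interface {
	Put(key, value []byte) error
	Get(key []byte) ([]byte, error)
}

// Blockstore wraps a Datastore and enforces the block invariant:
// the key is always the hash of the stored bytes.
type Blockstore struct {
	ds Datastore
}

func NewBlockstore(ds Datastore) *Blockstore {
	return &Blockstore{ds: ds}
}

// Put derives the key from the data, so a block can never be stored
// under the wrong key.
func (bs *Blockstore) Put(data []byte) ([]byte, error) {
	key := sha256.Sum256(data)
	return key[:], bs.ds.Put(key[:], data)
}

// Get verifies that the returned bytes still hash to the requested key.
func (bs *Blockstore) Get(key []byte) ([]byte, error) {
	data, err := bs.ds.Get(key)
	if err != nil {
		return nil, err
	}
	if sum := sha256.Sum256(data); !bytes.Equal(sum[:], key) {
		return nil, errors.New("block does not match its key")
	}
	return data, nil
}
```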
<reit>
jbenet: instead of performing a dir fanout by going back to the DHT for each node, have you considered reading the directory structure directly from your connected peer(s)?
<reit>
that is to say, if you're connected to a peer based on the root hash of a large tree, there's a good chance that node may have information on the rest of the tree already
<jbenet>
reit: in _most_ cases you wont go back out to the DHT -- people you're already connected to have the blocks.
<jbenet>
reit: bitswap already takes advantage of this by sending out the wantlist to those peers
bigbluehat has joined #ipfs
<whyrusleeping>
sooooo, my phone died at 70% battery
<whyrusleeping>
wont turn back on
<whyrusleeping>
its really warm
<whyrusleeping>
and the backplate is pressing out to the point where i can get my fingernail under it
<whyrusleeping>
aka, the battery may explode
<whyrusleeping>
jbenet: o/
<jbenet>
Um that's not fi
<jbenet>
Fun*
<whyrusleeping>
nope
<whyrusleeping>
its in the kitchen in a big metal pot until tomorrow
<spikebike>
why people tolerate planned obsolescence in the form of epoxied batteries in $500+ phones is beyond me
<whyrusleeping>
spikebike: this is actually just a defective model
<whyrusleeping>
my last phone had a removable battery, and the battery lasted longer than i wanted to keep the phone
<spikebike>
whyrusleeping: it's fairly common for batteries to die, swell, or hold only a small fraction of the original charge within the first 2 years.
<spikebike>
I replaced my batteries on my g1, g2, and galaxy nexus. Had a nexus 4 battery die. I also have an n5 that's significantly degraded since new.
<spikebike>
my nexus one battery lasted till I didn't want the phone anymore.
<spikebike>
note 4 is doing well, but only a year old and I can replace the battery.
<spikebike>
It also allows battery upgrades when someone overly cost optimizes the battery
<spikebike>
my gnex came with a 1750 mah battery, which many complained about loudly; in other markets it came with a 2100 mah battery
<spikebike>
made a huge difference and cost me $20 for the upgrade.
<spikebike>
I don't mind replacing my phone every 2 years or less, but it's nice if the old phone is usable for friends/family who might need an upgrade
pfraze has quit [Remote host closed the connection]
nsh has quit [Ping timeout: 246 seconds]
nsh has joined #ipfs
fleeky has quit [Ping timeout: 256 seconds]
fleeky has joined #ipfs
reit has quit [Ping timeout: 256 seconds]
besenwesen_ has quit [Quit: ☠]
besenwesen has joined #ipfs
besenwesen has joined #ipfs
Leer10 has joined #ipfs
<zignig>
cryptix: you still there ?
<zignig>
Luzifer: hai.
<cryptix>
zignig: re
<cryptix>
just had a really interesting meeting with some ppl from hamburg that want to use ipfs for a bunch of projects (cc jbenet)
<zignig>
coolies, what kind of data?
shea256 has joined #ipfs
<zignig>
I think I have a way to make a general ipfs proxy; my first test is going to be debian boot packages (with astral boot).
<cryptix>
zignig: maps and wiki-esque
<zignig>
nice, wiki-esque is a difficult one.
<zignig>
how to merge? can you have two sources?
<zignig>
or do you _have_ to have a single lineal source?
shea256 has quit [Ping timeout: 256 seconds]
<zignig>
one of the cool things about a merkle dag is A + B + C == C + A + B
<zignig>
it does not matter what order you add files in; the same files always produce the same hash.
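A toy illustration of that property (simplified: real ipfs directory nodes are protobuf objects with multihash links, here plain sha256 over sorted entries stands in for them). Because each file hashes to the same digest whenever it is added, and the directory object is built from its entries in sorted order, the root hash depends only on the contents:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

type entry struct {
	name string
	hash [32]byte
}

// dirHash builds a toy "directory object" from its entries sorted by
// name, so the result does not depend on the order files were added.
func dirHash(entries []entry) [32]byte {
	sorted := append([]entry(nil), entries...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].name < sorted[j].name })

	h := sha256.New()
	for _, e := range sorted {
		h.Write([]byte(e.name))
		h.Write(e.hash[:])
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	a := entry{"a.txt", sha256.Sum256([]byte("A"))}
	b := entry{"b.txt", sha256.Sum256([]byte("B"))}
	c := entry{"c.txt", sha256.Sum256([]byte("C"))}

	// Add the same files in two different orders: same root hash.
	fmt.Println(dirHash([]entry{a, b, c}) == dirHash([]entry{c, a, b})) // true
}
```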
<cryptix>
yup :) they're a bit in the prototyping stage right now - depending on how fast they want to move it could be that they use ipfs just to render out the static site and use ipfs deeper for the other project(s) - we will see, but they like the concept a lot
<zignig>
it is cool, static render data is a good start. without pub/sub, name systems and web of trust ipfs is static data (by defn.)
rschulman_ has joined #ipfs
atomotic has joined #ipfs
Encrypt has joined #ipfs
<cryptix>
sigh.. linux has forsaken me..
reit has joined #ipfs
hellertime has quit [Read error: Connection reset by peer]
hellertime has joined #ipfs
woahbot has quit [Ping timeout: 240 seconds]
alu has quit [Ping timeout: 244 seconds]
therealplato has quit [Ping timeout: 265 seconds]
<cryptix>
whyrusleeping: you are running archlinux+zfs too, right? lemme know if you can upgrade from kernel 4.0.7 to 4.1.2. i hit some panics on boot but i can't be bothered right now and it might be a fluke
<whyrusleeping>
man, if only i could speak german...
<cryptix>
'kernel panics during boot after update is the last thing i needed during this weather'
<whyrusleeping>
cryptix: i always uninstall zfs before kernel updates, and then reinstall afterwards
Encrypt has quit [Quit: Quitte]
<cryptix>
my rootfs isnt on zfs - worst thing to happen is that i dont get $HOME at login
<cryptix>
but this doesnt seem to be zfs related, its not in the stack traces
<cryptix>
no idea .. but too busy to mess with it right now...
<whyrusleeping>
interesting... my laptop (arch but no zfs) is on 4.1.2 and seems just fine
<whyrusleeping>
i'll let you know if i see anything weird
<cryptix>
<3
tilgovi has joined #ipfs
ebarch has quit [Quit: Gone]
therealplato has joined #ipfs
ebarch has joined #ipfs
shea256 has joined #ipfs
simonv3 has joined #ipfs
Tv` has joined #ipfs
mildred has quit [Quit: Leaving.]
mdem has joined #ipfs
<rschulman_>
cryptix: zfs vs btrfs go
<cryptix>
rschulman_: bsd interop is why i chose it :)
<cryptix>
i can piggyback a corp backup system this way for my stuff :)
<rschulman_>
ahhh, fair enough. :)
<cryptix>
can btrfs hotswap disks?
<cryptix>
in zfs if you have a mirrored pool and one disk is degrading, you can add a third and tell it to replace the faulty one while everything is online
ruby32 has joined #ipfs
<rschulman_>
cryptix: I think so
<rschulman_>
I haven't used either myself.
<rschulman_>
though I'd like to when I do a new computer soon.
<cryptix>
a friend of mine runs btrfs but he doesnt have dataloss paranoia like me.. :) (he doesnt care about snapshotting etc)
<Tv`>
cryptix: yes
<Tv`>
and btrfs snapshots are *really* great, but i still make an external backup based on them
<Tv`>
it's just a lot better than trying to backup a live system
<cryptix>
Tv`: to what? :) btrfs replacement?
<Tv`>
cryptix: hotswap
<cryptix>
disk* yea okay cool
<ReactorScram>
I want to use LVM snapshots for backup some day but my backup policy is pretty weak
<ReactorScram>
I imagine FS-level snapshots are better
<Tv`>
it was a long time ago, but i have pretty bad experiences of lvm snapshots
<cryptix>
yup i just send my snapshot files offsite - they are fed back into a zfs pool (so i can access and roll back there) and i save the monthly snapshot files separately on a disk that i store somewhere safe
<Tv`>
currently i run attic snapshots off the snapshots: deduplicated, encrypted, remote over ssh
<Tv`>
*backups off the snapshots
<boreq>
Can I have 2 questions? 1. I assume that it is better to switch to a better hashing function while implementing kademlia (the papers use 160 bit IDs, which implies SHA-1 even though it is not explicitly mentioned), and I think that SHA-256 is enough, do you agree? 2. Do you consider Multihash a good idea when it comes to sending it through the network? The length varies, which means that you have to actively check yet another variable-length field
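On the second question, the "variable-length field" is fairly mechanical to handle: a multihash on the wire is a varint function code, a varint digest length, then that many digest bytes. A simplified reader (an illustration, not the go-multihash library):

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
	"log"
)

// readMultihash reads one multihash off a stream:
// <varint hash-function code><varint digest length><digest bytes>.
func readMultihash(r *bufio.Reader) (code uint64, digest []byte, err error) {
	if code, err = binary.ReadUvarint(r); err != nil {
		return 0, nil, err
	}
	length, err := binary.ReadUvarint(r)
	if err != nil {
		return 0, nil, err
	}
	digest = make([]byte, length)
	_, err = io.ReadFull(r, digest)
	return code, digest, err
}

func main() {
	// 0x12 = sha2-256, 0x20 = a 32-byte digest, followed by the digest.
	wire := append([]byte{0x12, 0x20}, bytes.Repeat([]byte{0xab}, 32)...)

	code, digest, err := readMultihash(bufio.NewReader(bytes.NewReader(wire)))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("fn code 0x%x, %d-byte digest\n", code, len(digest))
}
```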
<whyrusleeping>
it doesnt look like his daemon is online at the moment
<alu>
howd you check
<alu>
im still learning all the commands
<whyrusleeping>
i searched the network for his peer ID
<alu>
this was published like 15 hours ago, anyone try it?
atomotic has joined #ipfs
tilgovi has quit [Ping timeout: 244 seconds]
patcon has quit [Ping timeout: 246 seconds]
M-Eric has joined #ipfs
<M-Eric>
jbenet: do you have any drafts already in flight on what you'd want from additional unixfs metadata, or pointers to issues, etc?
<whyrusleeping>
alu: that was being discussed in the windows issue
<whyrusleeping>
i think it was experimental and only worked on an older windows version
<M-Eric>
i was chatting with some coreos folks recently and they mentioned desires for a similar thing -- a standard metadata spec, e.g. for feeding into hashing -- and i also have an arbitrary/adhoc implementation of same already for similar reasons, seems like maybe we could all benefit from drafting something together?
<spikebike>
metadata is pretty hairy. Every time I think I have it handled (file permissions), then there's devices, hard and soft links, various timestamps, ACLs, etc.
<alu>
Gotcha, and yeah I subscribed to the windows issue
shea256 has quit [Remote host closed the connection]
opn has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<M-Eric>
that captures nearly everything... except hardlinks, which are super tricky to conceptualize how to handle correctly
<M-Eric>
i think ACLs fit under xattrs...? but i'm not actually sure, i don't use them
<M-Eric>
but yeah, i definitely headdesk'd as i discovered things about the various timestamps :)
<jbenet>
cc wking metadata discussion o/
<jbenet>
M-Eric we have a catchall Metadata object that will keep all the relevant unix attributes
<jbenet>
It has a merkle ptr to the target object so only need to hash that.
shea256 has joined #ipfs
<jbenet>
alu windows is not officially supported yet (it works but the UX is definitely bad, we don't have prebuilt binaries yet or installer or anything)
<alu>
do any of you play fighting games
<alu>
EVO 2015 starts today
<alu>
also whyrusleeping linked me an exe that worked :D
www has quit [Ping timeout: 260 seconds]
shea256 has quit [Ping timeout: 240 seconds]
<M-Eric>
jbenet, wking: go-ipfs/unixfs/* appears to be the place to look, correct?
<M-Eric>
i'm not sure i understand everything in unixfs.proto, can you maybe point me to some docs describing the meaning of blocksizes? is the Data field actually a merkle pointer?
<M-Eric>
if the current metadata has components that are relevant to ipfs chunking, do you think it would be possible to shuffle them elsewhere, so that we could come out with a completely transport independent thing?
<M-Eric>
e.g. i'd like to hash over the metadata and have it describe files at rest regardless of chunking algo, i'm not sure if that currently would be the case?
amiller- is now known as amiller
amiller is now known as Guest23520
Guest23520 has joined #ipfs
Guest23520 has quit [Changing host]
Guest23520 is now known as amiller
shea256 has quit [Remote host closed the connection]
opn has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
opn has joined #ipfs
patcon has quit [Quit: Leaving]
shea256 has joined #ipfs
patcon has joined #ipfs
<jbenet>
M-Eric the metadata object points to another file. the Data field is actual raw data. metadata does not impact chunking. its possible to add the whole hash of the file "at rest" to the metadata instead of the chunks, but we don't do that. is there a strong reason for needing this?
<M-Eric>
i might have to think more about this before saying a confident 'yes', but the temptation has occurred to me, anyway
<M-Eric>
so, for example, if i want to build a system that speaks ipfs hashes as the lingua franca for data identity, but then actually resides on disk as a tarball instead of chunks in a database (because that has other desirable properties for some reason, like detachability and export to another system that speaks tar but can't host an ipfs server for ~business reasons~)... it'd be really awesome if the hash doesn't care about that stuff
tilgovi has joined #ipfs
<M-Eric>
but i'm not sure what the most reasonable way to go about that would be, exactly (or indeed if it's actually reasonable; maybe i should give up, and those situations would have different hashes, and there's no mapping between them except to do the transform and suck it up).
<jbenet>
whyrusleeping: is pinbot happier now? it seems unhappy.
<whyrusleeping>
i'm fairly certain that pinbot just doesnt like jbenet
<whyrusleeping>
i pinned something this morning, around 30MB, and it worked just fine
<jbenet>
M-Eric: it's a tradeoff for sure. there are some benefits to the whole file in one hash (this is what people "usually have done" so they're more used to it). so it may be relevant to include. but the system itself (at least ipfs) doesnt _need_ it.
<whyrusleeping>
jbenet: why do we have utp vendored still?
<Luzifer>
whyrusleeping: cool! :) Building mail templates makes me want to wash my hands with acid… It's so messy… But if it looks good it's worth it :)
<whyrusleeping>
yeah, i wouldnt be upset to receive that
<whyrusleeping>
kachow!
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed s3-0.4.0 from f06caf3 to 2dc3ab7: http://git.io/vmgZu
<ipfsbot>
go-ipfs/s3-0.4.0 6181904 Tommi Virtanen: gofmt generated assets...
<ipfsbot>
go-ipfs/s3-0.4.0 9492cc9 Tommi Virtanen: Remove dead code...
<ipfsbot>
go-ipfs/s3-0.4.0 71d9018 Tommi Virtanen: core tests: Stop assuming internals of Config...
<whyrusleeping>
Tv`: that tommi virtanen guy pushed broken tests :P
<Tv`>
that guy is a jerk
<whyrusleeping>
but its all better now
<bret>
would not having a swap space cause ipfs to crash?
<jbenet>
Luzifer: i would add a gopher with an engineer hat or something. follow travis or circleci templates?
<jbenet>
bret: yeah maybe. cc whyrusleeping
<Tv`>
whyrusleeping: actually, not sure how; i'm comparing my old branch and what you just pushed, and the only difference is in the assets..
<bret>
i just realized my raspi2 ipfs node didn't have a swapfile
<Luzifer>
jbenet: still searching for someone with talent for graphics not wanting $249 for a logo… *looking at 99designs*
<whyrusleeping>
yeah, the tests for the assets were failing
<whyrusleeping>
it actually was not your fault
<whyrusleeping>
other than that you ran go generate
<whyrusleeping>
and the go generate command was wrong
<Tv`>
whyrusleeping: oh, a time bomb
<whyrusleeping>
yeap
<Tv`>
very nice
<whyrusleeping>
you just happened to be the poor soul that set it off
<whyrusleeping>
:P
<Tv`>
whyrusleeping: especially helpful that go-bindata changed its output format too, so comparing wasn't simple
<Tv`>
one of the reasons i like becky: it's easy to bundle in the repo, so the output doesn't change accidentally
Leer10 has quit [Ping timeout: 250 seconds]
<whyrusleeping>
Tv`: let the record show that i was never against becky
hamstercups has left #ipfs ["Leaving"]
<border>
but you were pretty raging in PV :p
* border
awaits his kick
Leer10 has joined #ipfs
<Luzifer>
gn8 everyone
<whyrusleeping>
Luzifer: gnite! hopefully you get to sleep the entire time ;)
uhhyeahbret has quit [Remote host closed the connection]
<Luzifer>
whyrusleeping: thanks! I hope so too. Last night of work before 3 weeks of vacation... In that time only calls from the fire dept can wake me up... ;)
shea256 has quit [Remote host closed the connection]
G-Ray has quit [Quit: Konversation terminated!]
uhhyeahbret has joined #ipfs
mdem has quit [Quit: Connection closed for inactivity]
<whyrusleeping>
jbenet: diff a.json b.json
<whyrusleeping>
uhm
<jbenet>
...
<whyrusleeping>
jbenet: for some reason i cant copy paste a link
<whyrusleeping>
but i can copy paste everything else...