whyrusleeping changed the topic of #ipfs to: IPFS - InterPlanetary File System - https://github.com/ipfs/ipfs -- channel logged at https://botbot.me/freenode/ipfs/ -- code of conduct at https://github.com/ipfs/community/blob/master/code-of-conduct.md -- sprints + work org at https://github.com/ipfs/pm/ -- community info at https://github.com/ipfs/community/
<lgierth> probably pulls in all of ipfs although we only wanna speak to the api
arhuaco_ has quit [Remote host closed the connection]
<lgierth> or so
<lgierth> going to bed, i'll be off tomorrow because i'm on the train / at a wedding
<lgierth> works so far, goes OOM during docker build on pluto though
<jbenet> sounds good!
<jbenet> hmmm yeah build on pluto is not happy.
<jbenet> whyrusleeping is there an example on publishing a package with gx and using it elsewhere?
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<daviddias> is gx ready for primetime whyrusleeping ?
<lgierth> gxgxgxgxgxgx \o/
<lgierth> zzz
<jbenet> sleep well lgierth! o/
<whyrusleeping> daviddias: gx is getting closer to primetime every time someone tries it and gives me feedback
<whyrusleeping> jbenet: re gx example, i can make one
<whyrusleeping> the readme kinda goes over it
<whyrusleeping> but doesnt specify how that works for go
<whyrusleeping> jbenet: could you take a look at this commit? https://github.com/ipfs/go-ipfs/commit/acb11c736c9b5dd98e4101d8d07521d50bcbf70f
<whyrusleeping> or rather, the one i just pushed because i forgot to add a godeps thing
<ipfsbot> [go-ipfs] whyrusleeping force-pushed s3-0.4.0 from acb11c7 to 2457d9c: http://git.io/vmgZu
<ipfsbot> go-ipfs/s3-0.4.0 2457d9c Jeromy: fixup datastore interfaces...
<jbenet> whyrusleeping i think it looks ok, anything in particular to worry about? (the repo setup looks ok to me)
shea256 has quit [Remote host closed the connection]
patcon has quit [Ping timeout: 264 seconds]
<jbenet> tperson: wanna hack on https://github.com/ipfs/notes/issues/2 ? we may be able to do some damage to it
<tperson> Sure, I can probably finish the blob store to get it to a point where it can be used.
<jbenet> tperson: cool, what's needed on it beyond that? just putting it together?
<jbenet> cc bengl
<tperson> Ya, I believe so. I don't know the complete idea behind putting it all together. I've looked at reginabox a bit, but not sure how the blob store fits into it. I'd assume that reginabox was picked because it uses a blob-store already?
<jbenet> tperson: yeah bengl wrote reginabox and he fixed it up to use any blob-store
<jbenet> so we should be able to just plug in the new thing.
<jbenet> IIRC, the ipfs-blob-store was working a bit like patch? with one big root object that points to all top level keys?
<tperson> oh excellent
<tperson> correct
<tperson> I have the dag-store which works like that
<jbenet> tperson: nice, that sounds good. is the root object anything special or a big list?
<jbenet> big list may be ok for n < 10,000
<jbenet> (and for nothing where memory is a big concern)
<jbenet> another option is to use a leveldb to map key -> ipfs root hash, but it's cheating a little bit.
<jbenet> put(key, val) -> leveldb.put(k, ipfs.add(val))
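jbenet's "cheating a little bit" mapping can be sketched roughly like this; note the map stands in for a real leveldb handle and the sha256 digest stands in for the hash `ipfs.add` would return — both are assumptions for illustration only:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// index stands in for the leveldb key -> root-hash mapping jbenet describes;
// a real implementation would use a leveldb handle instead of a map.
var index = map[string]string{}

// ipfsAdd is a stand-in for ipfs.add: here it just hashes the value.
// A real version would add the block to IPFS and return its multihash.
func ipfsAdd(val []byte) string {
	sum := sha256.Sum256(val)
	return hex.EncodeToString(sum[:])
}

// put records key -> root hash, i.e. leveldb.put(k, ipfs.add(val)).
func put(key string, val []byte) string {
	h := ipfsAdd(val)
	index[key] = h
	return h
}

func main() {
	h := put("greeting", []byte("hello"))
	fmt.Println("greeting ->", h)
}
```

The trade-off jbenet notes still applies: the index itself lives outside IPFS, so there is no single root hash that "means everything".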
<tperson> Ya, currently it's a large list. There are a few different ways we can implement it. Kind of depends on what we need.
<jbenet> it's nice to have one root hash that means everything
<tperson> One was to allow keys in a directory structure, which would build the dag object much like ipfs unixfs
<jbenet> we dont do any nice fanout yet though
<jbenet> yeah you mean with fanout?
<jbenet> like many objects representing the root one?
<tperson> There would still be a root hash
<tperson> You would traverse the root hash walking the links to find the spot to store the data.
<jbenet> yeah makes sense
<tperson> The value would then be stored in the `Data` attribute. Though that limits the size.
<jbenet> like the git fs fanout: abcdefgh.. -> ab/cd/efgh...
<tperson> So the alternate approach is to mix data and structure links.
<tperson> ya
<tperson> Or sort of, like that
keroberos has joined #ipfs
<tperson> blob-stores need to be able to provide keys
<jbenet> actually, tv's pinning thing does something like this
<tperson> So we can't simply fan out the hash.
<jbenet> so like list them out?
<clever> jbenet: pretty sure the fanout in git is purely to deal with crappy filesystems
<clever> FS's that barf when you get 3000 files in the same dir
<tperson> We could hash the key and do a fan out.
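tperson's "hash the key and do a fan out" idea, combined with jbenet's git-style `abcdefgh... -> ab/cd/efgh...` example, might look like this sketch (the helper names `fanoutPath` and `keyPath` are hypothetical):

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// fanoutPath maps a hex digest to a git-style two-level directory path,
// e.g. "abcdefgh..." -> "ab/cd/efgh...", so no single directory ever
// holds every key at once.
func fanoutPath(digest string) string {
	if len(digest) < 4 {
		return digest
	}
	return digest[0:2] + "/" + digest[2:4] + "/" + digest[4:]
}

// keyPath hashes an arbitrary blob-store key first (since raw keys are
// not evenly distributed) and then fans out the resulting digest.
func keyPath(key string) string {
	sum := sha1.Sum([]byte(key))
	return fanoutPath(hex.EncodeToString(sum[:]))
}

func main() {
	fmt.Println(keyPath("registry/some-package"))
}
```

As tperson points out, the catch is enumeration: because the store must be able to provide its keys, a pure hash fanout needs the original keys recorded somewhere alongside the data.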
<jbenet> clever yep
rschulman_ has joined #ipfs
<Tv`> clever: not just filesystems; try ls
<clever> yeah, ls will sort by default, so it wont give a single hit until the entire list has been read
<clever> ive also noticed some weird issues in the kernel level, i cant even ctrl+c it until the 1st row has returned
<tperson> I guess to get the blob-store to a good point we need to figure out the best data structure for it.
shea256 has joined #ipfs
<clever> i would just go with sqlite, simple enough to setup
<tperson> We'll we want it to run on ipfs :)
<clever> throw an index on the hash column, done
<clever> ah
<clever> i was thinking more about ipfs's internal storage
<tperson> ah yes, this is separate from ipfs the implementation.
<clever> if i remember correctly, sqlite stores large blob values separately from the rest of the row
<clever> so all the rows are tightly packed with any other metadata, then the bulk storage is elsewhere
<jbenet> clever: it still would be terrible perf. even leveldb is bad. we need to do our own storage there.
<jbenet> tperson: abstract-blob-store doesn't need to _list_ keys, right? just find them?
<clever> jbenet: i have done some high level profiling of sqlite, and it can go pretty fast if you know how to debug it
<tperson> Correct
<jbenet> clever: benchmark it against leveldb, boltdb and raw fd.
<clever> jbenet: what kind of performance issues have you noticed in sqlite?
<jbenet> fs*
<rschulman_> how do I clear up the mount points if ipfs daemon doesn't shut down gracefully?
<clever> never used leveldb directly, never heard of boltdb, and raw FS's would have to deal with variable between OS's
<clever> and even what the user has chosen on linux
shea256 has quit [Ping timeout: 240 seconds]
<jbenet> can probably use what's in https://github.com/ipfs/go-ipfs/blob/dev0.4.0/pin/
Wallacoloo has joined #ipfs
<alu> bacon sushi
<alu> woops wrong chan
<clever> jbenet: let me find my profiling tools
<jbenet> clever benchmark with >1M objects, with both sequential and random read/write workload. if sqlite comes remotely close to matching leveldb, i'll be impressed.
<tperson> alu: sounds delicious /cc whyrusleeping
ralphtheninja has joined #ipfs
<kyledrake> I'm not really recommending it, but haystack uses a single append-only volume file and then stores metadata in a separate index. Gets rid of the filesystem metadata. The consequence is that you need to "vacuum" the file to restore space. I wrote a version of this with 10 lines of ruby and it ran faster than my SSD on a single process.
<kyledrake> rather, it bottlenecked the SSD.
<Tv`> ^ what i've been talking about as "arena storage"
<clever> jbenet: http://pastebin.com/rbvMSgBA a simple example
<Tv`> i don't think ipfs has enough of a need for it, currently
<clever> a table with 600 rows and an index that happens to cover the group by field
<kyledrake> The index I guess would be a multi-hash with a value of the seek position and length in the volume. The b-tree could only read a small amount of the beginning of the multihash and fit in memory for a lot of data.
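The haystack-style layout kyledrake describes — one append-only volume plus a separate hash -> (seek position, length) index — can be sketched in memory like this (a real store would back `volume` with a file and persist the index; the type names are invented for illustration):

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// entry records where a blob lives inside the append-only volume.
type entry struct {
	offset int
	length int
}

// arena holds one append-only volume and a hash -> location index,
// avoiding per-blob filesystem metadata entirely.
type arena struct {
	volume bytes.Buffer
	index  map[[32]byte]entry
}

func newArena() *arena {
	return &arena{index: make(map[[32]byte]entry)}
}

// put appends the blob to the volume and records its position and length.
func (a *arena) put(blob []byte) [32]byte {
	key := sha256.Sum256(blob)
	a.index[key] = entry{offset: a.volume.Len(), length: len(blob)}
	a.volume.Write(blob)
	return key
}

// get looks up the location in the index and slices the volume.
func (a *arena) get(key [32]byte) ([]byte, bool) {
	e, ok := a.index[key]
	if !ok {
		return nil, false
	}
	return a.volume.Bytes()[e.offset : e.offset+e.length], true
}

func main() {
	a := newArena()
	k := a.put([]byte("hello"))
	v, _ := a.get(k)
	fmt.Printf("%s\n", v) // prints "hello"
}
```

Deletion is what forces the "vacuum" step kyledrake mentions: removing an index entry leaves a dead span in the volume until the file is rewritten.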
<clever> compiling the query took 3 disk reads, 2 headers and a single page
<clever> and then running the query took a single page read (plus a header for locking reasons)
<clever> in this case, the entire index fit in a single page, so its not a good example
<jbenet> Tv` whyrusleeping was saying that he's hitting bottlenecks with current flatfs and wanted to move towards arena.
<clever> let me check the other table
<Tv`> jbenet: that sounds like good news!
<jbenet> we could look into arena storage at some point in near future. let's probably land all the S3 stuff first.
<Tv`> that means other parts have improved
<clever> jbenet: 'select count(*) from logs' found ~30k rows, and oddly had to read 3mb!, 118 read calls
<clever> but thats not a typical use case either
queue has joined #ipfs
queue has quit [Client Quit]
<clever> jbenet: found something that should serve as a good sample, the git repo for linux
<clever> extract every git object, and store em in sqlite
<clever> remote: Counting objects: 4399663, done.
queue has joined #ipfs
Wallacoloo has left #ipfs [#ipfs]
queue is now known as qqueue
<rschulman_> jbenet: That did it, thanks. :)
www has joined #ipfs
<whyrusleeping> jbenet: Tv` the bottleneck of flatfs i'm running into is lots of small writes, the cost of the multitude of syscalls is dwarfing the cost of the write
<Tv`> whyrusleeping: linux?
<clever> whyrusleeping: are they in bulk or spread out over time?
<Tv`> whyrusleeping: i'd expect the cost is all the fsyncing it does to be safe; linux syscalls themselves are pretty darn fast
<whyrusleeping> Tv`: yeah, its the cost of the fsync i'm pretty sure
reit has quit [Ping timeout: 246 seconds]
<whyrusleeping> clever: theyre close together
<Tv`> whyrusleeping: btw your batch work is the one thing that allows working around that
<clever> whyrusleeping: not sure how it would affect other parts of your program, but if you start a transaction in sqlite, all of your inserts/updates go into ram
<whyrusleeping> yeah, i know
<clever> and are written out in large chunks to save io and syscalls
<Tv`> even arena storage, if asked to fsync every object, can save only the file creation part; batching multiple updates into one fsync is what gets the gains
<whyrusleeping> although in certain situations its difficult to 'batch' the writes together at the application level
<clever> yeah, i can see how that would be an issue
<clever> in mysql/innodb, there is an option to never sync after a query, but instead do 1 sync per second
<Tv`> aka "i didn't really like that data"
<clever> that lets the batching happen automatically, but you may lose up to a second
<clever> and it may land in the middle of an operation
<clever> cross-table stuff becomes less atomic
tilgovi has joined #ipfs
<prosodyContext> tperson: is that like streamium?
<tperson> No idea, I was just searching for the ipfs-api on npm and ran across this atm-ipfs-api package.
<tperson> Which appears to be a support lib for the `desktop` project.
<qqueue> hey ipfriends, is there a way to increase the timeout on `ipfs ls`, or whatever makes it abort with `Error: context deadline exceeded`?
<qqueue> Or more to the point I suppose, I am trying to run `ipfs ls QmQwR1UZTxDz2T8qdZBCC5qTvLZnnPrWp2LvtdkKUPhsyh` from one server and I know that hash exists on another server, yet it times out. How do I debug that?
rschulman_ has quit [Ping timeout: 256 seconds]
<jbenet> qqueue we should have a way of adding a timeout. i think there's an issue somewhere, if not file one?
<jbenet> something like --timeout=<duration>
<jbenet> on all commands
<jbenet> whyrusleeping didnt we have something like that? could set it as a context on the request.
<whyrusleeping> ive wanted that for so long
<qqueue> A cursory search through github doesn't find any relevant issues, I'll make one
<qqueue> oh, I was searching on ipfs/ipfs, thanks
www has quit [Ping timeout: 255 seconds]
<whyrusleeping> qqueue: no worries!
<whyrusleeping> ping us on that issue :)
alu has quit [Quit: WeeChat 1.3-dev]
<jbenet> ralphtheninja thanks for slump -- https://github.com/jbenet/node-random-json-stream
woahbot has quit [Ping timeout: 248 seconds]
domanic has quit [Ping timeout: 244 seconds]
alu has joined #ipfs
woahbot has joined #ipfs
<whyrusleeping> jbenet: working on gx, gonna make it a little nicer to use with symlinks
therealplato has joined #ipfs
therealplato has quit [Changing host]
therealplato has joined #ipfs
void has joined #ipfs
<qqueue> hmm, what's the expectation on being able to get objects from an arbitrary node, after the initial add? Is DHT announce synchronous? I'm just doing adds from my local machine and trying to get them from a vps.
<jbenet> qqueue: it depends on whether the nat traversal is working: try getting them from a gateway
<jbenet> (whyrusleeping: i think we may need to make some bitswap agents)
<whyrusleeping> jbenet: bitswapagents??
<whyrusleeping> :D
<qqueue> jbenet: ok, gateway.ipfs.io picked up my object, but it's still timing out on my vps, which I would expect to be easier on nat traversal. Any relevant diagnostics I can pull for that?
<jbenet> qqueue whats the uptime of your daemon? (we've noticed some weird states sometimes)
<whyrusleeping> 'ps -eo pid,cmd,etime | grep ipfs'
<qqueue> ~an hour
<jbenet> qqueue actually, do you have npm installed on your vps? if so, try: npm install -g bsdash && bsdash
<qqueue> oh, 01:55:33 two hours
<jbenet> qqueue you should see something like this in the gif here: https://github.com/jbenet/node-bsdash
mdem has quit [Quit: Connection closed for inactivity]
void has quit [Quit: ChatZilla 0.9.91.1 [Firefox 39.0/20150629114848]]
<qqueue> Okay, so when I try to `ipfs ls QmQwR1UZTxDz2T8qdZBCC5qTvLZnnPrWp2LvtdkKUPhsyh` (which, admittedly, is a stress test at 50k files), bsdash shows a little blip of the hash in "active requests", but then it immediately goes away (and ls times out 60 seconds later)
<qqueue> huh, I also have a 100-file test folder (`for i in $(seq 0 100); do echo $i > file$i; done`), and it has the same behavior QmTphasniAEXGimPnvy7av1Xk8akQuRbRwPCdKF8pdg3BT
reit has joined #ipfs
<qqueue> I _can_ do `ipfs get QmTphasniAEXGimPnvy7av1Xk8akQuRbRwPCdKF8pdg3BT`, which downloads as expected. And, now `ipfs ls` also works as expected. Possibly some difference of behavior in ls that performs badly on large directories?
reit has quit [Ping timeout: 244 seconds]
<jbenet> qqueue: ok so `ipfs ls` actually needs to get the files too, not just the dir, to show metadata.
<jbenet> qqueue: we've discussed lifting metadata to the dir, and that makes some sense for unix dirs, but gets harder to do when you have non-unix things.
<qqueue> ok. I've been trying to get my 50k directory as well, but it keeps stopping at around 1.88 MB. That explains why ls times out at least
reit has joined #ipfs
<jbenet> qqueue `ipfs get <root>` just times out?
<jbenet> qqueue try adding it again? (im curious if it's a dht providing issue)
<qqueue> yeah, `ipfs get QmQwR1UZTxDz2T8qdZBCC5qTvLZnnPrWp2LvtdkKUPhsyh`. I'll try re-adding it
shea256 has joined #ipfs
<qqueue> readded, but it still freezes at 1.411 mb on my vps. for sanity, I did a get on the local machine, and it's 48.84MB in total
shea256 has quit [Ping timeout: 264 seconds]
<ogd> are you sure you arent accidentally saving to your floppy drive
<ogd> they have a 1.4 mb capacity :)
<jbenet> ogd: you troll
<jbenet> qqueue that's pretty odd, can you show me your: ipfs swarm peers
<qqueue> oh, my daemon died somehow, that explains why it stopped at 1.4mb...
<jbenet> btw qqueue you said the dir had 50k files?
<spikebike> Hmm, got a new 4k TV to replace a conference projector. Seems like IPFS might be a nice way to let 10 random laptops publish docs that show up on the system connected to the TV
<jbenet> spikebike: yep! try it with the electron-app: github.com/ipfs/electron-app for ease of installing
<jbenet> qqueue we dont yet have dir fanout, so this is definitely stressing things -- cc whyrusleeping
<qqueue> jbenet: ok, that's kind of what I suspected, hence the `for i in` test files.
<spikebike> jbenet: fortunately the 4k tv is hooked up to a linux box. I was going to use chromecast streaming but that's only 720p
<qqueue> Is there an existing bug I can report to? I do have around ~50k actual files I'd like to serve at some point.
<jbenet> qqueue thanks for this test case
<jbenet> whyrusleeping we should make this a test bed test o/
<Tv`> TestAllKeysRespectsContext is a funky, funky test
<Tv`> i'm really not at all sure why there's a blockstore on top of a datastore
<alu> I just tried void's software btw
<alu> It's fucking SICK
<alu> you can export from blender into a VR room that can automatically load up for you
<alu> its like he added a multiplayer functionality for blender
<alu> for collaborative editing
<alu> He turned blender into the first collaborative online 3d modeling program
<alu> with IPFS and JanusVR
<alu> This fucking changes the workflow completely
<alu> Its executed so very well too!
<alu> works on windows now too
<jbenet> alu :)
<jbenet> alu: that's great!
<alu> Very cool stuff.
<alu> working on a demo for NASA now
<jbenet> Tv` blockstores deal with blocks -- it's a specific data structure with specific guarantees (hashes to the key). and there will be different implementations of them that do different things with that, for example, we're replacing the "blockservice" thing and making it a blockservice that uses bitswap. and we're making a "blockstore" that only stores to the
<jbenet> local fs. -- it is definitely _similar_ to a datastore, but much more complicated because they may deal with the network, with our assumptions about Blocks, and with other IPFS specific stuff. (datastores are not ipfs specific at all)
<reit> jbenet: instead of performing a dir fanout by going back to the DHT for each node, have you considered the idea of reading the directory structure directly from your connected peer(s) rather than going back to the DHT each time?
<reit> that is to say, if you're connected to a peer based on the root hash of a large tree, there's a good chance that node may have information on the rest of the tree already
<jbenet> reit: in _most_ cases you wont go back out to the DHT -- people you're already connected to have the blocks.
<jbenet> reit: bitswap already takes advantage of this by sending out the wantlist to those peers
<reit> ah it's already like that then, cool stuff
<jbenet> reit: and https://github.com/ipfs/notes/issues/12 will allow expressing it even better
<Tv`> jbenet: lots of words but the code really is only hindered by that
<qqueue> is there any more discussion on this 'dir fanout' idea? I don't see any issues mentioning it in github
<jbenet> Tv` i'm always open to clear examples. show _why_ and i'll consider it.
<jbenet> but note i have zero patience left today for explaining the virtues of abstraction.
<jbenet> i'll just start pasting turing's original universality paper
<jbenet> qqueue i think wking knows that best
<qqueue> thanks, I'll read up on it
shea256 has joined #ipfs
shea256 has quit [Ping timeout: 244 seconds]
<jbenet> this is pretty sweet https://github.com/alexpmorris/dipfs
<jbenet> whyrusleeping: benefit of gx -- forces us to make it all very good.
tilgovi has quit [Ping timeout: 240 seconds]
sharky has quit [Ping timeout: 264 seconds]
hellertime has quit [Quit: Leaving.]
sharky has joined #ipfs
kyledrake_ has joined #ipfs
zorun_ has joined #ipfs
barnacs_ has joined #ipfs
wking_ has joined #ipfs
kyledrake has quit [Ping timeout: 252 seconds]
zorun has quit [Ping timeout: 252 seconds]
rubiojr has quit [Ping timeout: 252 seconds]
hosh has quit [Ping timeout: 252 seconds]
bigbluehat has quit [Ping timeout: 252 seconds]
barnacs has quit [Ping timeout: 252 seconds]
wking has quit [Ping timeout: 252 seconds]
hosh_ has joined #ipfs
kyledrake_ is now known as kyledrake
rubiojr has joined #ipfs
bigbluehat has joined #ipfs
<whyrusleeping> sooooo, my phone died at 70% battery
<whyrusleeping> wont turn back on
<whyrusleeping> its really warm
<whyrusleeping> and the backplate is pressing out to the point where i can get my fingernail under it
<whyrusleeping> aka, the battery may explode
<whyrusleeping> jbenet: o/
<jbenet> Um that's not fi
<jbenet> Fun*
<whyrusleeping> nope
<whyrusleeping> its in the kitchen in a big metal pot until tomorrow
<spikebike> why people tolerate planned obsolence in the form of epoxied batteries in $500+ phones is beyond me
<whyrusleeping> spikebike: this is actually just a defective model
<whyrusleeping> my last phone had a removable battery, and the battery lasted longer than i wanted to keep the phone
<spikebike> whyrusleeping: it's fairly common for batteries to die, swell, or hold only a small fraction of the original charge within the first 2 years.
<spikebike> I replaced my batteries on my g1, g2, and galaxy nexus. Had a nexus 4 battery die. I also have an n5 that's significantly degraded since new.
<spikebike> my nexus one battery lasted till I didn't want the phone anymore.
<spikebike> note 4 is doing well, but only a year old and I can replace the battery.
<spikebike> It also allows battery upgrades when someone overly cost optimizes the battery
<spikebike> my gnex came with a 1750 which was complained about loudly by many, in other markets it came with a 2100 mah battery
<spikebike> made a huge difference and cost me $20 for the upgrade.
<spikebike> I don't mind replacing my phone every 2 years or less, but it's nice if the old phone is usable for friends/family who might need an upgrade
pfraze has quit [Remote host closed the connection]
nsh has quit [Ping timeout: 246 seconds]
nsh has joined #ipfs
fleeky has quit [Ping timeout: 256 seconds]
fleeky has joined #ipfs
reit has quit [Ping timeout: 256 seconds]
besenwesen_ has quit [Quit: ☠]
besenwesen has joined #ipfs
besenwesen has joined #ipfs
Leer10 has joined #ipfs
<jbenet> whyrusleeping: will the pot be enough? https://www.youtube.com/watch?v=7-xPHopebiE
mdem has joined #ipfs
sbruce has quit [Ping timeout: 255 seconds]
kbala has quit [Quit: Connection closed for inactivity]
dbolser has quit [Ping timeout: 264 seconds]
sbruce has joined #ipfs
zabirauf has joined #ipfs
jez0990_ has joined #ipfs
jez0990 has quit [Ping timeout: 244 seconds]
<kyledrake> Note to self, never put phone in pocket.
keroberos has quit [Max SendQ exceeded]
keroberos has joined #ipfs
sbruce has quit [Ping timeout: 264 seconds]
sbruce has joined #ipfs
nsh has quit [Ping timeout: 246 seconds]
silotis has quit [Ping timeout: 246 seconds]
sbruce has quit [Ping timeout: 248 seconds]
silotis has joined #ipfs
nsh has joined #ipfs
Taek has quit [Quit: No Ping reply in 180 seconds.]
sbruce has joined #ipfs
Taek has joined #ipfs
nsh has quit [Ping timeout: 246 seconds]
Taek has quit [Client Quit]
nsh has joined #ipfs
Taek has joined #ipfs
mildred has joined #ipfs
<ralphtheninja> jbenet: cool :)
mdem has quit [Quit: Connection closed for inactivity]
mildred has quit [Quit: Leaving.]
simonv3 has quit [Quit: Connection closed for inactivity]
sbruce has quit [Ping timeout: 256 seconds]
sbruce has joined #ipfs
mildred has joined #ipfs
Tv` has quit [Quit: Connection closed for inactivity]
zabirauf has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
arthurvr has joined #ipfs
arthurvr has left #ipfs [#ipfs]
nsh has quit [Ping timeout: 246 seconds]
Taek has quit [Quit: No Ping reply in 180 seconds.]
Taek has joined #ipfs
nsh has joined #ipfs
<cryptix> gday everybody
mildred has quit [Read error: Connection reset by peer]
mildred has joined #ipfs
atomotic has joined #ipfs
mildred has quit [Quit: Leaving.]
nsh has quit [Ping timeout: 246 seconds]
mildred has joined #ipfs
nsh has joined #ipfs
hellertime has joined #ipfs
mildred has quit [Quit: Leaving.]
rschulman_ has joined #ipfs
<Luzifer> ohai
atomotic has quit [Quit: Textual IRC Client: www.textualapp.com]
domanic has joined #ipfs
rschulman_ has quit [Ping timeout: 240 seconds]
mildred has joined #ipfs
www has joined #ipfs
roj has joined #ipfs
pfraze has joined #ipfs
roj has quit [Quit: Lost terminal]
<fd0> wow, asciinema is awesome: https://asciinema.org/a/23554
<zignig> cryptix: you still there ?
<zignig> Luzifer: hai.
<cryptix> zignig: re
<cryptix> just had a really interesting meeting with some ppl from hamburg that want to use ipfs for a bunch of projects (cc jbenet)
<zignig> coolies , what kind of data ?
shea256 has joined #ipfs
<zignig> I think I have a way to make an general ipfs proxy , my first test is going to be debian boot packages ( with astral boot ).
<cryptix> zignig: maps and wiki-esque
<zignig> nice , wiki-esque is a difficult one.
<zignig> how to merge ? can you have two sources ?
<zignig> or do you _have_ to have a single lineal source ?
shea256 has quit [Ping timeout: 256 seconds]
<zignig> one of the cool things about a merkle dag is A + B + C == C + A + B
<zignig> does not matter what order you add files in , the hash of the same files is the same hash.
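zignig's point that A + B + C == C + A + B can be sketched as follows: the directory object puts its links into a canonical (sorted) order before hashing, so insertion order never reaches the hash function. The `link`/`rootHash` names are invented for illustration; real unixfs objects are protobuf-encoded, not this ad-hoc concatenation:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
)

// link is one child entry in a directory-style dag object.
type link struct {
	Name string
	Data []byte
}

// rootHash sorts the links by name before hashing, so the same set of
// files always yields the same root hash regardless of add order.
func rootHash(links []link) string {
	sorted := append([]link(nil), links...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].Name < sorted[j].Name })
	h := sha256.New()
	for _, l := range sorted {
		child := sha256.Sum256(l.Data) // hash of the child object
		h.Write([]byte(l.Name))
		h.Write(child[:])
	}
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	abc := []link{{"a", []byte("1")}, {"b", []byte("2")}, {"c", []byte("3")}}
	cab := []link{{"c", []byte("3")}, {"a", []byte("1")}, {"b", []byte("2")}}
	fmt.Println(rootHash(abc) == rootHash(cab)) // true
}
```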
<cryptix> yup :) they're a bit in the prototyping stage right now - depending on how fast they want to move it could be that they use ipfs just to render out the static site and use ipfs deeper for the other project(s) - we will see but they like the concept a lot
<zignig> it is cool, static render data is a good start. without pub/sub, name systems and web of trust ipfs is static data (by defn.)
rschulman_ has joined #ipfs
atomotic has joined #ipfs
Encrypt has joined #ipfs
<cryptix> sigh.. linux has forsaken me..
reit has joined #ipfs
hellertime has quit [Read error: Connection reset by peer]
hellertime has joined #ipfs
woahbot has quit [Ping timeout: 240 seconds]
alu has quit [Ping timeout: 244 seconds]
therealplato has quit [Ping timeout: 265 seconds]
<cryptix> whyrusleeping: you are running archlinux+zfs too, right? lemme know if you can upgrade from kernel 4.0.7 to 4.1.2. i hit some panics on boot but i cant be bothered right now and it might be a fluke
<whyrusleeping> man, if only i could speak german...
<cryptix> 'kernel panics during boot after update is the last thing i needed during this weather'
<whyrusleeping> cryptix: i always uninstall zfs before kernel updates, and then reinstall afterwards
Encrypt has quit [Quit: Quitte]
<cryptix> my rootfs isnt on zfs - worst thing to happen is that i dont get $HOME at login
<cryptix> but this doesnt seem to be zfs related, its not in the stack traces
<cryptix> no idea .. but too busy to mess with it right now...
<whyrusleeping> interesting... my laptop (arch but no zfs) is on 4.1.2 and seems just fine
<whyrusleeping> i'll let you know if i see anything weird
<cryptix> <3
tilgovi has joined #ipfs
ebarch has quit [Quit: Gone]
therealplato has joined #ipfs
ebarch has joined #ipfs
shea256 has joined #ipfs
simonv3 has joined #ipfs
Tv` has joined #ipfs
mildred has quit [Quit: Leaving.]
mdem has joined #ipfs
<rschulman_> cryptix: zfs vs btrfs go
<cryptix> rschulman_: bsd interop is why i chose it :)
<cryptix> i can piggyback a corp backup system this way for my stuff :)
<rschulman_> ahhh, fair enough. :)
<cryptix> can btrfs hotswap disks?
<cryptix> in zfs if you have a mirrored pool and one disk is degrading, you can add a third and tell it to replace the faulty one while everything is online
ruby32 has joined #ipfs
<rschulman_> cryptix: I think so
<rschulman_> I haven't used either myself.
<rschulman_> though I'd like to when I do a new computer soon.
<cryptix> a friend of mine runs btrfs but he doesnt have dataloss paranoia like me.. :) (he doesnt care about snapshoting etc)
<Tv`> cryptix: yes
<Tv`> and btrfs snapshots are *really* great, but i still make an external backup based on them
<Tv`> it's just a lot better than trying to backup a live system
<cryptix> Tv`: to what? :) btrfs replacment?
<Tv`> cryptix: hotswap
<cryptix> disk* yea okay cool
<ReactorScram> I want to use LVM snapshots for backup some day but my backup policy is pretty weak
<ReactorScram> I imagine FS-level snapshots are better
<Tv`> it was a long time ago, but i have pretty bad experiences of lvm snapshots
<cryptix> yup i just send my snapshot files offsite - they are fed back into a zfs pool (so i can access and roll back there) and i save the monthly snapshot files separately on a disk that i store somewhere safe
<Tv`> currently i run attic snapshots off the snapshots: deduplicated, encrypted, remote over ssh
<Tv`> *backups off the snapshots
atomotic has quit [Quit: Textual IRC Client: www.textualapp.com]
<whyrusleeping> Tv`: do you use raid parity at all?
<whyrusleeping> i hear a lot of people saying just to do mirrored stripes
Taek has quit [Quit: No Ping reply in 180 seconds.]
<Tv`> whyrusleeping: yeah, mirrored stripes
Taek has joined #ipfs
<Tv`> not sure of the current state of btrfs raid5/6, but when i built the setup that wasn't ready
<whyrusleeping> ah, i had heard really good things about zfs's raidz6
<Tv`> and i'm ok with simple & stupid replication (btrfs raid1 is already a bunch smarter than normal raid1)
<Tv`> i have no interest in being held at gunpoint by oracle
<whyrusleeping> but ZoL isnt run by oracle?
<Xe> would ipfs be a decent choice for image data storage?
<whyrusleeping> Xe: are these images youre intending to share?
<Xe> whyrusleeping: they'd be uploads for an image booru clone
<Tv`> whyrusleeping: ZoL still suffers from the ZFS licensing, and can never be a real part of a linux distribution
<whyrusleeping> Xe: then yeah, i think ipfs would be a good way to store and distribute those
<whyrusleeping> Tv`: hrm... i guess thats fair
<Xe> whyrusleeping: what's the API look like from Go programs?
<Xe> like say I have a byteslice and want to get it in ipfs
<whyrusleeping> Xe: i've made this to interact with a daemon running locally: https://github.com/ipfs/go-ipfs-shell
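For Xe's byteslice question, a minimal sketch against the go-ipfs-shell package linked above. This is a hedged example: the `NewShell("localhost:5001")` constructor and the `Add(io.Reader)` signature are assumptions taken from that repo's README, and it needs a daemon running locally with the API on the default port.

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	// assumed import path, per the repo whyrusleeping links above
	shell "github.com/ipfs/go-ipfs-shell"
)

func main() {
	// Talk to the HTTP API of a locally running daemon (default port 5001).
	sh := shell.NewShell("localhost:5001")

	// A byteslice becomes an io.Reader via bytes.NewReader.
	data := []byte("hello from a byteslice")
	hash, err := sh.Add(bytes.NewReader(data))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("added:", hash) // a Qm... multihash on success
}
```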
<Xe> mm
<Xe> where's the getting started guide again?
<whyrusleeping> install notes are above the getting started section
<Xe> okay, getting it started from CoreOS then
<whyrusleeping> coreOS has go installed already, right?
<Xe> CoreOS has you put everything inside containers
<whyrusleeping> ah, right
<Xe> its base system doesn't even have a C compiler
<whyrusleeping> we had a nice guide to running ipfs in docker here yesterday...
<whyrusleeping> let me find it
<whyrusleeping> i dont think its been published yet, but heres the file: https://github.com/ipfs/blog/blob/master/src/1-run-ipfs-on-docker/index.md
<whyrusleeping> although, i cant say that i've ever actually used coreOS. so let us know if anything is different in that regard
<Xe> okay
<Xe> small problem
<Xe> port 4001 is already in use
<Xe> will changing this break things?
<cryptix> nope - just default
<cryptix> you advertise yourself with the actual address to other peers
<cryptix> iirc you can even use /tcp/0 and it will pick a random port on each startup
<whyrusleeping> Xe: yeah, it handles you changing and configuring all the addresses and ports just fine
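A sketch of the port change being discussed, using `ipfs config` (the `Addresses.Swarm` key is the one in the default config file; the daemon needs a restart to pick it up):

```shell
# Move the swarm listener off the default 4001:
ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4002"]'

# Or, per cryptix's suggestion, /tcp/0 lets the OS pick a random
# free port on each startup:
ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/0"]'
```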
<Xe> oh nice
<Xe> it also appears to work behind a nat firewall
<whyrusleeping> yeap, we sunk a rather large amount of work into getting stupid nats to work
<whyrusleeping> and now we all hate nats
<Xe> is there a guide to set up a discrete cluster?
<whyrusleeping> discrete as in, not connected to other peers outside your network?
<Xe> yeah
<Xe> also is directory listing outside the scope of ipfs?
<whyrusleeping> remove all the bootstrap nodes in your configs, and set them to the other nodes on your network
<whyrusleeping> and then also make sure your firewall blocks nodes from dialing into your network
<whyrusleeping> directory listing? 'ipfs ls <hash>' will list a directory in ipfs
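whyrusleeping's private-cluster recipe, spelled out (the address and peer ID below are placeholders for your own nodes):

```shell
# Drop the default bootstrap list...
ipfs bootstrap rm --all

# ...and point each node at a peer inside your own network:
ipfs bootstrap add /ip4/10.0.0.2/tcp/4001/ipfs/QmYourPeerIdHere

# Directory listing, once you have a hash:
ipfs ls <hash>
```

Remember his second point: the firewall still has to block outside nodes from dialing in, since any node that learns your peers' addresses could otherwise connect.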
<Xe> hmm
<Xe> how do I make a directory?
<whyrusleeping> a few different ways, you could add an existing directory
<whyrusleeping> or you could use the 'ipfs object patch' and 'ipfs object new' to craft one by hand
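The by-hand route could look like this (`hh.jpg` is just an example filename; `-q` makes `ipfs add` print only the hash):

```shell
# Start from an empty unixfs directory object...
EMPTY=$(ipfs object new unixfs-dir)

# ...add a file, then graft it into the directory under a chosen name:
FILE=$(ipfs add -q hh.jpg)
DIR=$(ipfs object patch $EMPTY add-link hh.jpg $FILE)

# The new directory object lists its one entry:
ipfs ls $DIR
```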
tilgovi has quit [Remote host closed the connection]
<Xe> ah
<Xe> hmm
<Xe> is there a way to see the list of files you uploaded?
<Xe> or are they lost once they are in the hive?
<Xe> !pins
<Xe> !pin QmTLJY2yMZ3YiQpYWpHt93cPmEADJRMEXc6NLzXAc2rcPn
<ReactorScram> I think you can list everything you've pinned but that will include some things other people put up
bitemyapp has quit [Ping timeout: 244 seconds]
<Xe> hmm
<Xe> does ipfs let you delete things?
<Luzifer> I'm on vacation!!!! (Yeah okay 15h on-call left and then but yay!)
domanic has quit [Read error: Connection reset by peer]
domanic has joined #ipfs
<whyrusleeping> Xe 'ipfs pin ls --type=all'
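The pin listing, with the type filter narrowed down (this is how to separate your own adds from the indirect child-block pins Xe runs into below):

```shell
# Everything the node has pinned, including the init docs:
ipfs pin ls --type=all

# Only the roots you pinned explicitly (files/directories you added):
ipfs pin ls --type=recursive

# Child blocks pinned because a recursive pin references them:
ipfs pin ls --type=indirect
```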
<Xe> hmm
<Xe> so if I upload a big file to ipfs
<Xe> how does the network balance things out?
<whyrusleeping> you don't really 'upload' files to ipfs, you add them to your node, which makes them available on the network for others to download
<Xe> ah
<Xe> i seem to be getting a lot of other files pinned from other nodes
<Xe> indirectly even
<ReactorScram> Xe: There's not a delete per se but if nobody is hosting a file, it will be unavailable
<ReactorScram> just like torrents being seeded
<Xe> hmm
<whyrusleeping> Xe: not every hash is a file
<Xe> what are the non-file hashes?
<whyrusleeping> and we seed the readme and welcome docs when you init
<whyrusleeping> so, files are split into multiple blocks when you add them
<whyrusleeping> generally, the recursive pins correspond to files
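The block-splitting can be seen directly (`somebigfile` is a placeholder):

```shell
# Adding a file yields one root hash...
HASH=$(ipfs add -q somebigfile)

# ...but 'ipfs refs' shows the child blocks it was chunked into.
# These are the extra hashes that show up as indirect pins,
# not files of their own:
ipfs refs $HASH
```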
<Xe> ah
zigguratt has joined #ipfs
zigguratt has left #ipfs [#ipfs]
<Xe> this is interesting
<rschulman_> whyrusleeping: v. important question: how can I get an IPFS sticker for my laptop?
shea256 has quit [Remote host closed the connection]
<whyrusleeping> uhm, you can PM me your mailing address
shea256 has joined #ipfs
alusion has joined #ipfs
<alusion> hey is ipfs installable on windows
simonv3 has quit [Quit: Connection closed for inactivity]
shea256 has quit [Remote host closed the connection]
<ReactorScram> alusion: Yeah last I heard it runs on Windows although the FUSE mount probably does not work
<ReactorScram> I was testing a game for Windows that used IPFS and almost had it working
<whyrusleeping> ^ accurate as far as I know
<ReactorScram> I will revive that project some day... osmeday
* whyrusleeping should probably get a windows VM sometime...
<alusion> okay any special instructions needed?
<alusion> I am helping a friend out
<ipfsbot> [go-ipfs] whyrusleeping pushed 1 new commit to s3-0.4.0: http://git.io/vmKFW
<ipfsbot> go-ipfs/s3-0.4.0 f06caf3 Jeromy: comments from CR...
<whyrusleeping> alusion: it *should* just work... but we have a thread here with all of our windows adventures to date: https://github.com/ipfs/go-ipfs/issues/959
JasonWoof has quit [Ping timeout: 256 seconds]
shea256 has joined #ipfs
JasonWoof has joined #ipfs
JasonWoof has quit [Changing host]
JasonWoof has joined #ipfs
<alusion> Guys this is not user friendly at all lmao
<whyrusleeping> alusion: hrm?
<alusion> he got this error
<whyrusleeping> oh, he doesnt have go set up correctly
<whyrusleeping> have him set the GOPATH variable to somewhere (maybe homedir/gopkg)
<whyrusleeping> and then rerun the command
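For the Windows side of this: cmd.exe has no `export`; the equivalents are `set` (current shell) and `setx` (persisted). The `gopkg` directory is just the one whyrusleeping suggests above.

```shell
:: cmd.exe - set GOPATH for the current shell only:
set GOPATH=%USERPROFILE%\gopkg

:: ...and persist it for future shells (new windows only, not the
:: one you run this in):
setx GOPATH "%USERPROFILE%\gopkg"
```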
shea256 has quit [Ping timeout: 248 seconds]
<alusion> export on windows?
<alusion> lol
<whyrusleeping> uhm... i've no idea how windows shell works tbh
patcon has joined #ipfs
<Xe> whyrusleeping: can I get one too? I can come over to SF
shea256 has joined #ipfs
<whyrusleeping> Xe: i'm in seattle right now, but can probably ship some to the SF area
<Xe> ah
<whyrusleeping> although
<alusion> gopath not set hmm
<whyrusleeping> some ipfsers may be in the bay area at some point, and will very likely have stickers
<whyrusleeping> alusion: that looks like a lot of effort to set an env var...
<whyrusleeping> we need to figure this out better for windows
shea256 has quit [Ping timeout: 248 seconds]
<domanic> jbenet, hey, does ipfs have mutable references in like you where talking about?
<whyrusleeping> domanic: we have ipns
<whyrusleeping> its not 'finished' yet, but you can still play around with it
<domanic> is that the mutable reference thing?
<whyrusleeping> yeah
<whyrusleeping> it allows you to have a mutable pointer to static content
<domanic> whyrusleeping, can you link me to code and docs? i don't see a ipns repo
<alusion> hehhe... ipns
<domanic> ha
shea256 has joined #ipfs
<whyrusleeping> domanic: its part of ipfs and the go-ipfs tool
<whyrusleeping> you can publish with 'ipfs name publish <hash>'
<whyrusleeping> and similarly resolve an entry with 'ipfs name resolve <hash>'
<whyrusleeping> ^ an example/tutorial i wrote a little while back
<whyrusleeping> hopefully its still up to date
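The publish/resolve pair in full (hashes below are placeholders; as whyrusleeping says, the only name available so far is your node's peer ID, so publish defaults to it):

```shell
# Point your node's peer-ID name at some content:
ipfs name publish QmSomeContentHash

# Resolve a name (yours or someone else's) back to the current hash:
ipfs name resolve QmYourPeerID

# Gateways understand the prefix too:
#   http://localhost:8080/ipns/QmYourPeerID
```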
mildred has joined #ipfs
shea256 has quit [Ping timeout: 244 seconds]
shea256 has joined #ipfs
opn has joined #ipfs
<alusion> dude
<alusion> why isnt there an ipfs exe
<whyrusleeping> not sure why there isnt a windows one there
<whyrusleeping> Luzifer: ping
<Luzifer> whyrusleeping: because of missing dependencies not in Godeps.json
<Luzifer> s/not in/in/
<alusion> lemme see
domanic has quit [Ping timeout: 260 seconds]
<alusion> Luzifer i dont get it
<whyrusleeping> oooooh, great. windows go deps
<Luzifer> alusion: was just an answer to question of whyrusleeping why gobuilder isn't building for windows
<whyrusleeping> alusion: i'll try and get a build for you
<alusion> im helping a friend over mumble to get it installed on windows
<alusion> theres no way I can do this for every 3d modeler
<alusion> 3D Modeling people aren't programmers lol
<alusion> thanks whyrusleeping
<alusion> that'd be SUPER useful
<alusion> especially after the tool that void released
<alusion> we have a lot of new interest in IPFS from windows users and artists
<boreq> I have to tell you, IPFS is the best thing since command line pipes.
<whyrusleeping> !pin QmWHsJtxX6kqqHBbbVgqgMURP4dRBHB75Zf9JBhwQ7UUg8
<pinbot> now pinning QmWHsJtxX6kqqHBbbVgqgMURP4dRBHB75Zf9JBhwQ7UUg8
alusion is now known as alu
alu has quit [Changing host]
alu has joined #ipfs
<pinbot> pin QmWHsJtxX6kqqHBbbVgqgMURP4dRBHB75Zf9JBhwQ7UUg8 successful!
<whyrusleeping> you can verify the binary there with my signature through keybase
<whyrusleeping> username is whyrusleeping
therealplato1 has joined #ipfs
<whyrusleeping> hrm, i probably should have signed the hash, not the file
therealplato has quit [Read error: Connection reset by peer]
<whyrusleeping> i figured keybase would be smart enough
<daviddias> !pin QmVB9vMoJMua3JxSyUBsrXTWty65muAoeEBeHFGfrNVF8M
<whyrusleeping> hey pinbot
opn has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<daviddias> !pin
<whyrusleeping> !friends
<pinbot> my friends are: whyrusleeping jbenet tperson krl kyledrake zignig
<daviddias> !friends
<pinbot> my friends are: whyrusleeping jbenet tperson krl kyledrake zignig
<whyrusleeping> ah, youre not a friend
<whyrusleeping> i'll fix that
<daviddias> !can i be your friend?
<whyrusleeping> !botsnack
<pinbot> om nom nom
opn has joined #ipfs
<alu> I think it's working whyrusleeping
<whyrusleeping> alu: wooo!
<boreq> that really cheered me up, will implement in my bot whyrusleeping
<alu> it initialized
<whyrusleeping> alu: whats the peer ID?
<alu> lol uh lemme see
<alu> whats the command he type for that
<whyrusleeping> it should have printed it out, but you can type 'ipfs id'
opn has quit [Client Quit]
<alu> !g how to copy from cmd
<alu> fuck
woahbot has joined #ipfs
<alu> !g how to copy from cmd
<woahbot> alu: https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/windows_dos_copy.mspx - Open Command Prompt Right-click the title bar of the command prompt window, point to Edit, and then click Mark. Click the beginning of the text you want to copy.
<alu> yeah
<alu> here's his peer ID
<alu> QmfMKWyHmWGQ1LbnfpwwpMhAhxdXAeGho3t4UwwMP9gfCR
<whyrusleeping> cool cool.
<alu> does it work
<boreq> Can I have 2 questions? 1. I assume that it is better to switch to a better hashing function while implementing kademlia (the papers use 160 bit IDs so that implies SHA-1 despite the fact that it is not explicitly mentioned) and I think that SHA-256 is enough, do you agree? 2. Do you consider Multihash a good idea when it comes to sending it through the network? The length varies which means that you have to actively check yet another variable-l
<whyrusleeping> it doesnt look like his daemon is online at the moment
<alu> howd you check
<alu> im still learning all the commands
<whyrusleeping> i searched the network for his peer ID
<whyrusleeping> 'ipfs ping QmfMKWyHmWGQ1LbnfpwwpMhAhxdXAeGho3t4UwwMP9gfCR'
<whyrusleeping> will do a DHT lookup
<whyrusleeping> and then try and talk to the peer
<whyrusleeping> 'ipfs dht findpeer QmfMKWyHmWGQ1LbnfpwwpMhAhxdXAeGho3t4UwwMP9gfCR' also works
<whyrusleeping> and prints out the address we can see him from
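The peer-debugging commands from this exchange, collected (peer ID as used above):

```shell
# DHT lookup plus a liveness check against the peer:
ipfs ping QmfMKWyHmWGQ1LbnfpwwpMhAhxdXAeGho3t4UwwMP9gfCR

# Just the lookup, printing the addresses the peer advertises:
ipfs dht findpeer QmfMKWyHmWGQ1LbnfpwwpMhAhxdXAeGho3t4UwwMP9gfCR

# And to ask who is providing a given piece of content:
ipfs dht findprovs <hash of file>
```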
<whyrusleeping> it looks like his daemon may be up, but his NAT is preventing incoming dials
<whyrusleeping> (also, we probably don't have good NAT traversal on windows)
<alu> I see
<alu> Yeah he added a file
<alu> ipfs add hh.jpg
<alu> added QmeC7hewzaSZDx4SEEjvkfsedTDQxk4sBoHWqUz3aLv1zs hh.jpg
* whyrusleeping afk for a bit
shea256 has quit [Remote host closed the connection]
shea256 has joined #ipfs
kord has joined #ipfs
<Xe> hmm
<Xe> If I wanted to have nodes automatically sync new files uploaded to an ipfs grid, what would I have to do?
opn has joined #ipfs
domanic has joined #ipfs
<alu> okay
<alu> We're attempting to serve a file over ipfs from windows now
<domanic> whyrusleeping, ipfs ${name} publish ${hash} ?
<whyrusleeping> domanic: name is a subcommand
<jbenet> Xe for now until we impl more things, try nsq or some other pub/sub thing. (Etcd would work too)
<domanic> whyrusleeping, ah then i am confused, how do you choose/know the name?
<alu> Hmm strange
<alu> when he types in ipfs id he gets null addresses
<whyrusleeping> domanic: names are based on ipfs keys, and currently the only 'name' you can use is your nodes peer ID
<whyrusleeping> so the command defaults to using that
<alu> sec
<whyrusleeping> we are working on fixing that, so you can generate more names to use
<alu> his daemon isnt on
<whyrusleeping> alu: lol, i was just about to ask that
<domanic> whyrusleeping, riiight so you get 1 homepage
<whyrusleeping> domanic: yeap, for now.
<whyrusleeping> domanic: you can also map dns records to ipfs objects too
<alu> wooo we okay he has addresses okay hes gunna add a file and try to serve it over the gateway
<whyrusleeping> see the TXT records on ipfs.git.sexy
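The DNS mapping whyrusleeping points at works via a dnslink TXT record; roughly (the object hash shown is a placeholder, not the real record):

```shell
# Inspect the record; the value format is dnslink=/ipfs/<hash>:
dig +short TXT ipfs.git.sexy
#   "dnslink=/ipfs/QmSomeObjectHash"

# Once the record exists, gateways can serve the domain under /ipns/:
#   http://localhost:8080/ipns/ipfs.git.sexy
```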
<alu> QmeC7hewzaSZDx4SEEjvkfsedTDQxk4sBoHWqUz3aLv1zs hh.jpg
<whyrusleeping> alu: i see him providing it
<whyrusleeping> but i cant connect to him because of his NAT
<alu> but you cant also see it right
<alu> yeah
<ogd> what do you call a comparison of two cryptographic bitfields?
<ogd> a whitfield diffie! bwahahaha
<whyrusleeping> ogd: . . .
<rschulman_> /kickban ogd
<rschulman_> :)
kbala has joined #ipfs
<jbenet> ogd that's hilarious
<alu> what command did you use to see him providing the file whyrusleeping ?
<ogd> i do think we should call any group of encrypted bytes a 'whitfield'
<rschulman_> I've met Whit Diffie... he might actually appreciat that.
<whyrusleeping> alu: 'ipfs dht findprovs -v <hash of file>'
tilgovi has joined #ipfs
<alu> What do you think he should do about NAT btw?
<cSmith> portforward
<whyrusleeping> alu: he could port forward 4001 to his machine
<alu> He forwarded 4001
www has quit [Ping timeout: 256 seconds]
<whyrusleeping> alu: hrmmm
<alu> lemme check if he did that right
<alu> He said should be yeah
<whyrusleeping> alu: whats his external IP?
<sprintbot> Sprint Checkin! [whyrusleeping jbenet cryptix wking lgierth krl kbala_ rht__ daviddias dPow chriscool gatesvp]
<alu> well he has like two external IPs [strange]
<alu> 184.98.235.177 via 8080
<alu> 184.98.49.209
<ogd> rschulman_: does whit diffie secretly control the US government y/n
<whyrusleeping> ogd: decline to answer
<ogd> good answer, good answer
<whyrusleeping> alu: i dont see anything on either of those addresses... :/
<alu> Hmm let me see
<rschulman_> ogd: well, we were only at a cocktail party, so I don't think he would have told me if yes
<alu> https://i.imgur.com/7GC6bdm.png forwarding ports
<alu> i told him to forward to 177
<alu> okay he forwarded to the LAN ip
<alu> gunna restart daemon
<alu> 404 not found still
<alu> hrmm
<alu> Have gotten this far, now its a networking issue
<alu> I'm very persistent about getting things to work
<whyrusleeping> yeap, we can probably implement tcp reuseport in windows using SO_REUSEADDR
<whyrusleeping> which would likely help a lot
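The SO_REUSEADDR idea can be sketched in Go. This is a Linux sketch using today's `net.ListenConfig` (which postdates this log); Windows has its own socket-option constants, which is exactly the porting work being discussed.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"syscall"
)

// listenReuse opens a TCP listener with SO_REUSEADDR set on the
// socket before bind, so the address can be rebound quickly.
func listenReuse(addr string) (net.Listener, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var serr error
			if err := c.Control(func(fd uintptr) {
				serr = syscall.SetsockoptInt(int(fd),
					syscall.SOL_SOCKET, syscall.SO_REUSEADDR, 1)
			}); err != nil {
				return err
			}
			return serr
		},
	}
	return lc.Listen(context.Background(), "tcp", addr)
}

func main() {
	ln, err := listenReuse("127.0.0.1:0") // :0 = ephemeral port
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()
	fmt.Println("listening on", ln.Addr())
}
```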
<alu> Yeah, do you think this is as best we can do right now because
<alu> duplicating these steps will be very hard for other people
<alu> atleast now theres an exe
<Xe> whyrusleeping: hmm that could work
<whyrusleeping> alu: yeah, we need to make the install process easier for sure
<alu> Well, good progress has been made testing it on windows
<alu> Gotta move onto other stuff now, thanks for the excellent help whyrusleeping
<whyrusleeping> yeah, thanks for pressing on!
<alu> just one last test
<alu> can you read the file QmeC7hewzaSZDx4SEEjvkfsedTDQxk4sBoHWqUz3aLv1zs
<Xe> trying...
<Xe> it's taking a bit
<Xe> seems really slow
<alu> what command did you use
shea256 has quit [Remote host closed the connection]
<Xe> xena@hyperadmin ~/var/bin $ docker exec ipfs-node ipfs cat QmeC7hewzaSZDx4SEEjvkfsedTDQxk4sBoHWqUz3aLv1zs
<Xe> Error: context deadline exceeded
<alu> Alright, that's all for now
www has joined #ipfs
<alu> his exact words: "I'm supportive of all this but after 4 hours spent to get it running, it's quite a pain in the ass to install"
<Xe> i got it working first try on my coreos machin
<Xe> $ docker run -d --name ipfs-node -v /data/sdc/ipfs/staging:/export -v /data/sdc/ipfs/data:/data/ipfs -p 8080:8080 -p 4002:4001 -p 127.0.0.1:5001:5001 jbenet/go-ipfs:latest
<alu> So is the issues around windows here? https://github.com/ipfs/go-ipfs/issues/959
<alu> Also someting makes me think that installing docker on windows might make it easier
<alu> standardization etc
<Xe> you're probably better off using a linux machine
<rschulman_> is there anywhere a discussion on the pub/sub problem?
<alu> Yeah I use linux
<Xe> rschulman_: i was just gonna use redis
<alu> but many new interest is coming from windows users
<Xe> rschulman_: my use case was using IPFS to store image data for a website
<Xe> and have the images automatically be synched to a CDN
shea256 has joined #ipfs
<alu> this was published like 15 hours ago, anyone try it?
atomotic has joined #ipfs
tilgovi has quit [Ping timeout: 244 seconds]
patcon has quit [Ping timeout: 246 seconds]
M-Eric has joined #ipfs
<M-Eric> jbenet: do you have any drafts already in flight on what you'd want from additional unixfs metadata, or pointers to issues, etc?
<whyrusleeping> alu: that was being discussed in the windows issue
<whyrusleeping> i think it was experimental and only worked on an older windows version
<M-Eric> i was chatting with some coreos folks recently and they mentioned desires for a similar thing -- a standard metadata spec, e.g. for feeding into hashing -- and i also have an arbitrary/adhoc implementation of same already for similar reasons, seems like maybe we could all benefit from drafting something together?
<spikebike> metadata is pretty hairy. Everytime I think I have it handled (file permissions), then there's devices, hard and soft links, various timestamps, ACLs, etc.
<alu> Gotcha, and yeah I subscribed to the windows issue
shea256 has quit [Remote host closed the connection]
opn has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
opn has joined #ipfs
<M-Eric> that captures nearly everything... except hardlinks, which are super tricky to conceptualize how to handle correctly
<M-Eric> i think ACLs fit under xattrs...? but i'm not actually sure, i don't use them
<M-Eric> but yeah, i definitely headdesk'd as i discovered things about the various timestamps :)
<jbenet> cc wking metadata discussion o/
<jbenet> M-Eric we have a catchall Metadata object that will keep all the relevant unix attributes
<jbenet> It has a merkle ptr to the target object so only need to hash that.
shea256 has joined #ipfs
<jbenet> alu windows is not officially supported yet (it works but the UX is definitely bad, we don't have prebuilt binaries yet or installer or anything)
<alu> do any of you play fighting games
<alu> EVO 2015 starts today
<alu> also whyrusleeping linked me an exe that worked :D
www has quit [Ping timeout: 260 seconds]
shea256 has quit [Ping timeout: 240 seconds]
shea256 has joined #ipfs
patcon has joined #ipfs
<whyrusleeping> jbenet: ping, when you get a sec
atomotic has quit [Quit: Textual IRC Client: www.textualapp.com]
Encrypt has joined #ipfs
<M-Eric> jbenet, wking: go-ipfs/unixfs/* appears to be the place to look, correct?
<M-Eric> i'm not sure i understand everything in unixfs.proto, can you maybe point me to some docs describing the meaning of blocksizes? is the Data field actually a merkle pointer?
<M-Eric> if the current metadata has components that are relevant to ipfs chunking, do you think it would be possible to shuffle them elsewhere, so that we could come out with a completely transport independent thing?
<M-Eric> e.g. i'd like to hash over the metadata and have it describe files at rest regardless of chunking algo, i'm not sure if that currently would be the case?
amiller- is now known as amiller
amiller is now known as Guest23520
Guest23520 has joined #ipfs
Guest23520 has quit [Changing host]
Guest23520 is now known as amiller
shea256 has quit [Remote host closed the connection]
opn has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
opn has joined #ipfs
patcon has quit [Quit: Leaving]
shea256 has joined #ipfs
patcon has joined #ipfs
<jbenet> M-Eric the metadata object points to another file. the Data field is actual raw data. metadata does not impact chunking. its possible to add the whole hash of the file "at rest" to the metadata instead of the chunks, but we don't do that. is there a strong reason for needing this?
<M-Eric> i might have to think more about this before saying a confident 'yes', but the temptation has occurred to me, anyway
<M-Eric> so, for example, if i want to build a system that speaks ipfs hashes as the lingua franca for data identity, but then actually resides on disk as a tarball instead of chunks in a database (because that has other desirable properties for some reason, like detachability and export to another system that speaks tar but can't host an ipfs server for ~business reasons~)... it'd be really awesome if the hash doesn't care about that st
tilgovi has joined #ipfs
<M-Eric> but i'm not sure what the most reasonable way to go about that would be, exactly (or indeed if it's actually reasonable; maybe i should give up, and those situations would have different hashes, and there's no mapping between them except to do the transform and suck it up).
kord has quit [Quit: Linkinus - http://linkinus.com]
<jbenet> !pin QmVPscBi5VCEzZiRFHKisQojT9zYS94UkNqYoJTtLV956r
<pinbot> now pinning QmVPscBi5VCEzZiRFHKisQojT9zYS94UkNqYoJTtLV956r
<pinbot> [host 0] failed to pin QmVPscBi5VCEzZiRFHKisQojT9zYS94UkNqYoJTtLV956r: Post http://localhost:5001/api/v0/pin/add/QmVPscBi5VCEzZiRFHKisQojT9zYS94UkNqYoJTtLV956r?enc=json&r=true&stream-channels=true: read tcp 127.0.0.1:5001: connection reset by peer
<pinbot> pin QmVPscBi5VCEzZiRFHKisQojT9zYS94UkNqYoJTtLV956r successful!
<jbenet> wat
<jbenet> !pin QmUoowwU6K74twf4eLtQ5YgkFRfXTDVzqekcDM9KuZsib9
<pinbot> now pinning QmUoowwU6K74twf4eLtQ5YgkFRfXTDVzqekcDM9KuZsib9
<pinbot> [host 0] failed to pin QmUoowwU6K74twf4eLtQ5YgkFRfXTDVzqekcDM9KuZsib9: Post http://localhost:5001/api/v0/pin/add/QmUoowwU6K74twf4eLtQ5YgkFRfXTDVzqekcDM9KuZsib9?enc=json&r=true&stream-channels=true: read tcp 127.0.0.1:5001: connection reset by peer
<pinbot> pin QmUoowwU6K74twf4eLtQ5YgkFRfXTDVzqekcDM9KuZsib9 successful!
<Xe> i think it ded
<jbenet> whyrusleeping: is pinbot happier now? it seems unhappy.
<whyrusleeping> i'm fairly certain that pinbot just doesnt like jbenet
<whyrusleeping> i pinned something this morning, around 30MB, and it worked just fine
<jbenet> M-Eric: it's a tradeoff for sure. there are some benefits to the whole file in one hash (this is what people "usually have done" so they're more used to it). so it may be relevant to include. but the system itself (at least ipfs) doesnt _need_ it.
<whyrusleeping> jbenet: why do we have utp vendored still?
tilgovi has quit [Ping timeout: 246 seconds]
patcon has quit [Ping timeout: 244 seconds]
taaz has left #ipfs [#ipfs]
rschulman_ has quit [Quit: Lost terminal]
therealplato1 has quit [Ping timeout: 256 seconds]
border has joined #ipfs
therealplato has joined #ipfs
zabirauf has joined #ipfs
shea256 has quit [Remote host closed the connection]
shea256 has joined #ipfs
Encrypt has quit [Quit: Sleeping time!]
G-Ray has joined #ipfs
mildred has quit [Quit: Leaving.]
ruby32 has quit [Quit: Leaving]
freedaemon has quit []
pfraze has quit [Remote host closed the connection]
therealplato has quit [Ping timeout: 244 seconds]
hamstercups has joined #ipfs
<Luzifer> whyrusleeping, jbenet: What do you think about this? http://knut.cc/image/3L3q1U3I1r0o
<whyrusleeping> Luzifer: pretty
therealplato has joined #ipfs
opn has quit [Ping timeout: 264 seconds]
<Luzifer> whyrusleeping: cool! :) Building mail templates makes me wanting to wash my hands with acid… Its so messy… But if it looks good it's worth it :)
<whyrusleeping> yeah, i wouldnt be upset to receive that
<whyrusleeping> kachow!
<ipfsbot> [go-ipfs] whyrusleeping force-pushed s3-0.4.0 from f06caf3 to 2dc3ab7: http://git.io/vmgZu
<ipfsbot> go-ipfs/s3-0.4.0 6181904 Tommi Virtanen: gofmt generated assets...
<ipfsbot> go-ipfs/s3-0.4.0 9492cc9 Tommi Virtanen: Remove dead code...
<ipfsbot> go-ipfs/s3-0.4.0 71d9018 Tommi Virtanen: core tests: Stop assuming internals of Config...
<whyrusleeping> Tv`: that tommi virtanen guy pushed broken tests :P
<Tv`> that guy is a jerk
<whyrusleeping> but its all better now
<bret> would not having a swap space cause ipfs to crash?
<jbenet> Luzifer: i would add a gopher with an engineer hat or something. follow travis or circleci templates?
<jbenet> bret: yeah maybe. cc whyrusleeping
<Tv`> whyrusleeping: actually, not sure how; i'm comparing my old branch and what you just pushed, and the only difference is in the assets..
<bret> i just realized my raspi2 ipfs node didn't have a swapfile
<Luzifer> jbenet: still searching for someone with talent for graphics not wanting $249 for a logo… *looking at 99designs*
<whyrusleeping> yeah, the tests for the assets were failing
<whyrusleeping> it actually was not your fault
<whyrusleeping> other than your ran go generate
<whyrusleeping> and the go generate command was wrong
<Tv`> whyrusleeping: oh, a time bomb
<whyrusleeping> yeap
<Tv`> very nice
<whyrusleeping> you just happened to be the poor soul that set it off
<whyrusleeping> :P
<Tv`> whyrusleeping: especially helpful that go-bindata changed its output format too, so comparing wasn't simple
<Tv`> one of the reasons i like becky: it's easy to bundle in the repo, so the output doesn't change accidentally
Leer10 has quit [Ping timeout: 250 seconds]
<whyrusleeping> Tv`: let the record show that i was never against becky
hamstercups has left #ipfs ["Leaving"]
<border> but you were pretty raging in PV :p
* border wait his kick
Leer10 has joined #ipfs
<Luzifer> gn8 everyone
<whyrusleeping> Luzifer: gnite! hopefully you get to sleep the entire time ;)
uhhyeahbret has quit [Remote host closed the connection]
<Luzifer> whyrusleeping: thanks! I hope so too. Last night of work before 3 weeks of vacation... In that time only calls from the fire dept can wake me up... ;)
shea256 has quit [Remote host closed the connection]
G-Ray has quit [Quit: Konversation terminated!]
uhhyeahbret has joined #ipfs
mdem has quit [Quit: Connection closed for inactivity]
<whyrusleeping> jbenet: diff a.json b.json
<whyrusleeping> uhm
<jbenet> ...
<whyrusleeping> jbenet: for some reason i cant copy paste a link
<whyrusleeping> but i can copy paste everything else...