jbenet changed the topic of #ipfs to: IPFS - InterPlanetary File System - https://github.com/ipfs/ipfs -- channel logged at https://botbot.me/freenode/ipfs/ -- Code of Conduct: https://github.com/ipfs/community/blob/master/code-of-conduct.md -- Sprints: https://github.com/ipfs/pm/ -- Community Info: https://github.com/ipfs/community/ -- FAQ: https://github.com/ipfs/faq -- Support: https://github.com/ipfs/support
<whyrusleeping> really hoping so!
<achin> \o/
Matoro has quit [Ping timeout: 260 seconds]
<ianopolous> whyrusleeping: just to confirm: the protobuf version of object.put works through the http api right?
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<whyrusleeping> uh
<whyrusleeping> probably
<whyrusleeping> the http api mirrors the cli perfectly
pinbot has joined #ipfs
<ianopolous> ok thanks
Matoro has joined #ipfs
kode54 has joined #ipfs
patcon has joined #ipfs
OutBackDingo has joined #ipfs
tilgovi has quit [Ping timeout: 240 seconds]
wscott has joined #ipfs
Matoro has quit [Ping timeout: 260 seconds]
patcon has quit [Read error: Connection reset by peer]
<M-davidar> jbenet (IRC): cf Linus' rants about github :p
hellertime has joined #ipfs
<jbenet> whyrusleeping: do we have a quick doc explaining unixfs + all the importer stuff?
<jbenet> i dont think we do
<jbenet> @everyone - does anybody here want to help us by writing one?
<achin> "importer stuff" ?
<whyrusleeping> chunking and layout
<achin> i can help
<achin> though i'm not sure on the details of the ipfs chunker (other than "chunk at X bytes")
<jbenet> whyrusleeping can say more-- or whyrusleeping want to write it? looking for a quick doc with links so we can get it implemented in js soon
<achin> (i also started a blog post a while ago that attempted to explain merkledags, by first building a toy one out of json objects, then going through IPFS's protobuf PBNode and PBLink objects. unixfs might fit in here somewhere)
<whyrusleeping> the simple one is just 'chunk at X' bytes
<whyrusleeping> and the basic layout is a balanced tree with data in the leaf nodes; an in-order traversal reads back the correct data
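That two-line description maps to very little code. Below is a minimal Go sketch of the fixed-size "chunk at X bytes" strategy; the helper name is hypothetical, and the real go-ipfs importer (which builds the balanced link tree on top of these leaves) differs in detail.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"io"
)

// SplitFixed splits r into chunks of at most size bytes, i.e. the simple
// "chunk at X bytes" strategy. (Hypothetical helper; the real go-ipfs
// chunker lives in the importer code and differs in detail.)
func SplitFixed(r io.Reader, size int) ([][]byte, error) {
	var chunks [][]byte
	buf := make([]byte, size)
	for {
		n, err := io.ReadFull(r, buf)
		if n > 0 {
			chunk := make([]byte, n)
			copy(chunk, buf[:n])
			chunks = append(chunks, chunk)
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return chunks, nil // short or empty final read: end of input
		}
		if err != nil {
			return nil, err
		}
	}
}

func main() {
	data := bytes.Repeat([]byte("interplanetary"), 1000)
	chunks, _ := SplitFixed(bytes.NewReader(data), 4096)
	// Each chunk becomes a leaf of a balanced tree of link nodes; an
	// in-order walk of the leaves yields the original file bytes.
	for i, c := range chunks {
		h := sha256.Sum256(c)
		fmt.Printf("chunk %d: %5d bytes, hash %x...\n", i, len(c), h[:4])
	}
}
```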
Matoro has joined #ipfs
Guest73396 has joined #ipfs
<achin> tonight, though, i think i am going to try to dig into this patch add-link issue
patcon has joined #ipfs
patcon has quit [Read error: Connection reset by peer]
<jbenet> responded
<ion> Alright, I’ll try to write some sharness tests tomorrow.
e-lima has joined #ipfs
kode54 has quit [Ping timeout: 240 seconds]
elima_ has quit [Ping timeout: 244 seconds]
uhhyeahbret has quit [Ping timeout: 252 seconds]
voxelot has quit [Ping timeout: 250 seconds]
kode54 has joined #ipfs
patcon has joined #ipfs
<achin> any ipfs-devs around?
Guest73396 has quit []
HoboPrimate has quit [Quit: HoboPrimate]
<noffle> whyrusleeping: does https://godoc.org/github.com/ipfs/go-ipfs-api#Shell work for the common node/api interface we were talking about?
ploopkazoo has quit [Quit: ZNC - 1.6.0 - http://znc.in]
ipfsPics-Didier has joined #ipfs
ReactorScram has left #ipfs ["busy this month"]
ploopkazoo has joined #ipfs
<ipfsPics-Didier> Hello there, anyone could tell me how many possible hashes there are with IPFS' hash function?
<ion> The current implementation defaults to SHA-256, that’s 2^256.
<ipfsPics-Didier> Thanks!
<noffle> whyrusleeping: well, with an interface in front of it
patcon has quit [Read error: Connection reset by peer]
ipfsPics-Didier has quit [Quit: Page closed]
zuul is now known as juul
xelra has quit [Ping timeout: 268 seconds]
xelra has joined #ipfs
voxelot has joined #ipfs
Pent has joined #ipfs
ygrek has quit [Ping timeout: 246 seconds]
<whyrusleeping> noffle: yeah, that would be good
<whyrusleeping> its definitely not the complete interface we want
<whyrusleeping> but its a good start
<achin> whyrusleeping: do you have a few minutes?
danslo has joined #ipfs
<achin> i'm narrowing in on #1925 and it's getting to the point where i need help
edrex has joined #ipfs
legobanana has quit [Quit: Textual IRC Client: www.textualapp.com]
legobanana has joined #ipfs
Matoro has quit [Ping timeout: 260 seconds]
<whyrusleeping> achin: sure, ive got a minute, but i do need to run soon
<whyrusleeping> er, nvm. i actually need to run right now, but if you PM me or send me an issue link i can help out
<achin> briefly, i think something is going wrong in Node.Copy()
<achin> no prob, i'll write something up for github
hellertime has quit [Quit: Leaving.]
voxelot has quit [Ping timeout: 252 seconds]
voxelot has joined #ipfs
<achin> oh man, this is gnarly
slothbag has quit [Quit: Leaving.]
pfraze has quit [Remote host closed the connection]
pfraze has joined #ipfs
<noffle> whyrusleeping: that's fine. I can start with a super minimal interface (one that satisfies what I need for ipget) that can be satisfied by both api and node, and folks can expand it as functionality is needed
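For illustration, such a minimal interface might look like the sketch below; the interface name is hypothetical, though go-ipfs-api's Shell already exposes Cat/Get methods with roughly these shapes.

```go
package ipget

import "io"

// Fetcher is a sketch of the minimal interface described above: just
// enough for ipget, satisfiable both by go-ipfs-api's HTTP Shell and by
// a thin wrapper around an in-process node. The name is hypothetical.
type Fetcher interface {
	// Cat streams the content behind an /ipfs/... path or bare hash.
	Cat(path string) (io.ReadCloser, error)
	// Get fetches the object at path into a local directory tree.
	Get(path, outdir string) error
}
```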
ygrek has joined #ipfs
Oatmeal has quit [Ping timeout: 250 seconds]
Oatmeal has joined #ipfs
Matoro has joined #ipfs
pfraze has quit [Remote host closed the connection]
sseagull has quit [Quit: leaving]
harlan_ has joined #ipfs
sonatagreen has quit [Ping timeout: 264 seconds]
pfraze has joined #ipfs
ygrek has quit [Ping timeout: 260 seconds]
pfraze has quit [Remote host closed the connection]
pfraze has joined #ipfs
pfraze has quit [Remote host closed the connection]
reit has quit [Quit: Leaving]
martinBrown has quit [Quit: -]
martinBrown has joined #ipfs
<ion> achin++
carstn has joined #ipfs
carstn has quit [Client Quit]
reit has joined #ipfs
ocdmw has quit []
<M-davidar> !m achin
<M-davidar> :/
voxelot has quit [Ping timeout: 240 seconds]
<M-davidar> [0__0]: help
<M-davidar> apparently botbot's dead
uhhyeahbret has joined #ipfs
mrcstvn has joined #ipfs
mrcstvn has quit [Max SendQ exceeded]
sharky has quit [Ping timeout: 260 seconds]
NightRa has joined #ipfs
mrcstvn has joined #ipfs
r1k0 has quit [Remote host closed the connection]
Eudaimonstro has quit [Remote host closed the connection]
<whyrusleeping> botbot and multivac
sharky has joined #ipfs
dcposch has joined #ipfs
<jbenet> yeah not sure-- botbot's dead on many chans
fingertoe has quit [Ping timeout: 246 seconds]
<M-davidar> yeah, i'm not sure why multivac keeps dying :/
<M-davidar> possibly unrelated
<M-davidar> oh, that's interesting
<M-davidar> Connecting to irc.freenode.net:6667...
<M-davidar> Connection error: [Errno -2] Name or service not known
<M-davidar> has freenode been messing with DNS records or something? :/
harlan_ has quit [Quit: Connection closed for inactivity]
qqueue has quit [Ping timeout: 264 seconds]
qqueue has joined #ipfs
qqueue has quit [Read error: Connection timed out]
qqueue has joined #ipfs
amstocker has quit [Ping timeout: 244 seconds]
M-hrjet has joined #ipfs
* M-davidar waves to hrjet
* M-hrjet waves back ;) Been following IPFS for a long time, but the channel is too active to keep up :)
<M-davidar> hehe, yeah, there's a lot going on :)
<M-davidar> feel free to ask questions though
<M-hrjet> Has anyone been working on a proxy protocol for IPFS?
<M-hrjet> I would like to integrate IPFS support into an application, but I don't want to open peer ports directly in the app.
<M-hrjet> Rather, I would like to run the IPFS server on a server machine somewhere, and proxy to it from the app.
<M-davidar> yeah, definitely, you can use the gateway and/or http api endpoint
<M-hrjet> Cool, thanks. Will read up on that.
<M-davidar> there's already a few libraries to make this easier: https://github.com/ipfs/ipfs/issues/83
<M-davidar> hrjet: what kind of application is it?
domanic has joined #ipfs
<M-davidar> ooh, cool :)
chriscool has quit [Ping timeout: 246 seconds]
<ipfsbot> [go-ipfs] whyrusleeping created fix/nil-data-marshal (+1 new commit): http://git.io/vliwQ
<ipfsbot> go-ipfs/fix/nil-data-marshal dbb9ec4 Jeromy: set data and links nil if not present...
<M-davidar> hrjet: you're also welcome to open an issue in https://github.com/ipfs/apps/issues if you want to discuss any specific concerns about integrating ipfs into gngr
<ipfsbot> [go-ipfs] whyrusleeping opened pull request #1935: set data and links nil if not present (master...fix/nil-data-marshal) http://git.io/vlirk
<whyrusleeping> achin: good job on that, thanks!
mildred has joined #ipfs
<M-davidar> hrjet: oh, you're the guy that made the FLIF polyfill?
konubinix has quit [Quit: ZNC - http://znc.in]
konubinix has joined #ipfs
<ipfsbot> [go-ipfs] whyrusleeping force-pushed fix/nil-data-marshal from dbb9ec4 to 0d35cc9: http://git.io/vliML
<ipfsbot> go-ipfs/fix/nil-data-marshal 0d35cc9 Jeromy: set data and links nil if not present...
multivac has joined #ipfs
<M-hrjet> davidar: Sorry for the lag, my net is having some issues. Will have a look at the issues and docs.
<M-davidar> hrjet: np. You might also be interested in https://github.com/ipfs/apps/issues/8 :)
legobanana has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
Pharyngeal has quit [Ping timeout: 272 seconds]
mrcstvn has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
[0__0] has joined #ipfs
Pharyngeal has joined #ipfs
<M-hrjet> The HTTP API endpoint doesn't seem to be authenticated?
<M-davidar> hrjet: it has CORS enabled
<M-davidar> but generally you don't want to be opening it up to the public
<M-davidar> there's currently work going on in getting a more-authenticated subset of the API into the gateway itself
<M-davidar> (and then eventually on the ipfs.io gateway)
<M-hrjet> I guess I am looking for a simple IPFS daemon, that I can host on a VPS (or inside my local network, or even my laptop), which only does one thing: fetch and cache IPFS requests and serve them via HTTP.
<M-hrjet> Applications can then use this daemon for translating any ipfs:// addresses into http:// requests.
s_kunk has quit [Ping timeout: 260 seconds]
<M-davidar> hrjet: that's what the gateway does
<ipfsbot> [go-ipfs] whyrusleeping created refactor/transport (+1 new commit): http://git.io/vlPvz
<ipfsbot> go-ipfs/refactor/transport 6de20f9 Jeromy: refactor net code to use transports, in rough accordance with libp2p...
gsdgdfs has joined #ipfs
<ipfsbot> [go-ipfs] whyrusleeping opened pull request #1937: refactor net code to use transports, in rough accordance with libp2p (master...refactor/transport) http://git.io/vlPfP
<M-davidar> what you describe is just the ipfs gateway (which you can self-host)
<M-davidar> i used to have an ipfs gateway running (on a small vps) on my own domain, for example
<M-hrjet> Oh, where's the source for that? Is it part of https://github.com/ipfs/go-ipfs ?
Transisto2 has quit [Ping timeout: 264 seconds]
<M-davidar> yeah
<M-davidar> by default it listens on port 8080
<M-davidar> which you can proxy with nginx to a public port
kode54 has quit [Ping timeout: 264 seconds]
<M-hrjet> And the auth could be done via nginx! Sounds good. Thanks davidar.
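As a sketch of that setup: nginx would typically do this with proxy_pass plus auth_basic, but the same idea fits in a few lines of Go. Everything here (credentials, listen port) is illustrative; only the gateway's default 127.0.0.1:8080 address comes from the discussion above.

```go
package main

import (
	"crypto/subtle"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The local gateway; 8080 is the go-ipfs default.
	target, _ := url.Parse("http://127.0.0.1:8080")
	proxy := httputil.NewSingleHostReverseProxy(target)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Hypothetical credentials; compared in constant time.
		user, pass, ok := r.BasicAuth()
		if !ok ||
			subtle.ConstantTimeCompare([]byte(user), []byte("hrjet")) != 1 ||
			subtle.ConstantTimeCompare([]byte(pass), []byte("secret")) != 1 {
			w.Header().Set("WWW-Authenticate", `Basic realm="ipfs gateway"`)
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		proxy.ServeHTTP(w, r) // forward to the local gateway
	})
	log.Fatal(http.ListenAndServe(":8443", handler)) // TLS omitted for brevity
}
```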
NightRa has quit [Quit: Connection closed for inactivity]
kode54 has joined #ipfs
zz_r04r is now known as r04r
Tv` has quit [Quit: Connection closed for inactivity]
donpdonp has quit [Ping timeout: 265 seconds]
e-lima has quit [Ping timeout: 255 seconds]
bmcorser_ has joined #ipfs
reit has quit [Ping timeout: 250 seconds]
bmcorser_ has quit [Client Quit]
rendar has joined #ipfs
e-lima has joined #ipfs
s_kunk has joined #ipfs
<ipfsbot> [go-ipfs] jbenet pushed 1 new commit to master: http://git.io/vlPVi
<ipfsbot> go-ipfs/master b508c23 Juan Benet: Merge pull request #1935 from ipfs/fix/nil-data-marshal...
zugzwanged has joined #ipfs
dignifiedquire has joined #ipfs
zugz has quit [Ping timeout: 272 seconds]
reit has joined #ipfs
[0__0] has quit [Ping timeout: 255 seconds]
ianopolous has quit [Ping timeout: 265 seconds]
<stoopkid> so, are there are any services or frameworks for services that allow people to post content to the ipfs network without hosting an ipfs node themselves?
<stoopkid> i.e. through the browser
NeoTeo has joined #ipfs
cemerick has joined #ipfs
<locusf> jbenet: whyrusleeping: is there a problem if I use a self-made library, licensed under GPLv3, as a block translator?
<cryptix> stoopkid: the public gateways will soon allow POSTing data through http
<stoopkid> ah, interesting
<cryptix> (aka writable gateway https://github.com/ipfs/infrastructure/issues/105 )
<stoopkid> do you know where i can find info on how to set up an ipfs gateway?
<ipfsbot> [go-ipfs] cryptix force-pushed fix/compliantWritableGW from 3c3683a to 385749e: http://git.io/vlGAp
<ipfsbot> go-ipfs/fix/compliantWritableGW 385749e Henry: writable gateway: added tests from @ion1's RFC summary...
<victorbjelkholm> word of caution though, the writable gateway won't store your things forever. That's where openipfs comes into play
<victorbjelkholm> (working title)
<stoopkid> hehe
gsdgdfs has quit [Ping timeout: 255 seconds]
martinkl_ has joined #ipfs
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
LiterallyBecca has joined #ipfs
<cryptix> stoopkid: the go-ipfs impl comes with the gateway by default (localhost:8080)
<stoopkid> ah hm
<stoopkid> i see
<cryptix> the writable part is also already in but a bit rough and will change a little to be more compliant with http rfc specs
<stoopkid> gotcha, thank you very much
<cryptix> de nada :) cant wait to see more stuff built on top
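A sketch of what posting to a locally-run writable gateway could look like; the `/ipfs/` endpoint shape and the Location response header are assumptions based on the discussion and the linked issue, not a settled spec.

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// POST a blob to a locally-run writable gateway. The /ipfs/ endpoint
	// and the Location response header are assumptions from the linked
	// issue; the exact shape may change as the RFC-compliance work lands.
	resp, err := http.Post("http://127.0.0.1:8080/ipfs/", "text/plain",
		strings.NewReader("hello from the writable gateway"))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// On success the new content's /ipfs/<hash> path is expected here.
	fmt.Println(resp.StatusCode, resp.Header.Get("Location"))
}
```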
<victorbjelkholm> is there any list of apps that we can suggest people to write?
<victorbjelkholm> guessing this would be the place otherwise: https://github.com/ipfs/awesome-ipfs/issues/7
<M-davidar> VictorBjelkholm (IRC): submit an issue to ipfs/apps :)
<victorbjelkholm> M-davidar, seems like the whole issue tracker of ipfs/apps is that list :D
<M-davidar> basically, yeah ;)
<ipfsbot> [go-ipfs] cryptix pushed 1 new commit to fix/compliantWritableGW: http://git.io/vlXOd
<ipfsbot> go-ipfs/fix/compliantWritableGW d251b34 Henry: compliant writableGW: updated tests with expected data...
mildred has quit [Ping timeout: 260 seconds]
NightRa has joined #ipfs
Eudaimonstro has joined #ipfs
mildred has joined #ipfs
<bergie> hi! we're looking at IPFS for https://thegrid.io/ and I have couple of questions
<bergie> 1) what's the storage overhead (if any) for ipfs-added file vs just straight file in the filesystem
<bergie> 2) we don't generally deal with files, but instead with page contents coming through our MsgFlo network. I'd like to still represent a site as a full IPFS directory structure. Is there a way to make partial updates (change only certain files) via ipfs-api
<bergie> 3) is there a recommended way to expose the IPFS address of a website when serving it via regular HTTP
[0__0] has joined #ipfs
<M-davidar> hi bergie :)
<M-davidar> 1) files will be copied into the ipfs blockstore, so will potentially double the storage requirements
<M-davidar> however, neocities is storing lots of websites in ipfs, so minimising this overhead is on the radar
<M-davidar> 2) i think this is (or soon will be) possible with mfs (see ipfs files --help on the 0.4-dev version)
<bergie> before looking at IPFS we were thinking to just gzip all the pages, since that reduces their size about 90%. But then I think it wouldn't work nicely with stuff like https://gateway.ipfs.io/ipfs/QmbeKSEPE2fdhKxBi1tAtJcuCYBL6VeoDW8iKnvuyJ97Xk
<M-davidar> yeah, ipfs will automatically compress its copy of files soon
<M-davidar> and it already does deduplication, which also helps
<M-davidar> 3) you mean the hash?
<bergie> M-davidar: yeah, I mean the hash at 3) currently I'm looking at sending a Link rel="alternate" header with a ipfs:// URL
<bergie> though http://mementoweb.org/guide/rfc/ sounds also interesting for this, especially if we start keeping track of old website hashes
<M-davidar> bergie: i think neocities is also doing something with publishing the ipfs hashes along with websites - you should probably speak to kyledrake about it
<M-davidar> (he's in here every so often)
<bergie> yeah, our case is very similar to theirs. thanks!
<bergie> what's the approximate timeline for the mfs stuff (partial updates to an existing ipfs-added directory)?
dignifiedquire has quit [Quit: dignifiedquire]
<M-davidar> it's already in the development branch, but i'm not sure what the timeline on that being released is
<M-davidar> i think it should be within the next few months, but you'd have to ask whyrusleeping
<M-davidar> (who should probably be around in a few hours)
<bergie> cool, I'll stick around :-)
<bergie> right now what we do is simply ipfs add each file on its own, and store a map between URL and the hash so we can ipfs cat the files when they're requested, so there is no directory relationship on IPFS level
<bergie> the IPFS hash is then propagated to our other servers so they can pin it at their leisure
<cryptix> re partial updates: you can already do that with 'ipfs object patch'
<cryptix> iirc the mfs api will just make it a bit nicer
martinkl_ has joined #ipfs
<bergie> OK, are there any docs? I poked a bit around the ipfs object API via https://www.npmjs.com/package/ipfs-api
<M-davidar> ipfs object patch --help
<cryptix> yup - not so sure about docs on the js-ipfs-api
<cryptix> but you want /api/v0/object/patch
<bergie> OK, so basically a directory is represented as a IPFS hash with links to files inside the directory as name+hash combos?
<M-davidar> yeah
<bergie> if so, it sounds like I can just ipfs add the individual files as I already do, and then ipfs object put/patch to create a "pseudo-folder" to contain them
<cryptix> bergie: right - i started organizing lots of stuff like that
<bergie> which I assume produces a new hash that then (in my case) represents the current state of a website
<bergie> so very similar to how we deal with git blobs and trees
<M-davidar> yeah, it's very similar to git
<ipfsbot> [go-ipfs] cryptix pushed 1 new commit to fix/compliantWritableGW: http://git.io/vlXik
<ipfsbot> go-ipfs/fix/compliantWritableGW aa7b637 Henry: writableGW: passing non-meaningful tests...
<cryptix> yup ipfs unixfs dirs are basically git trees, just without the acl metadata
<cryptix> (which is planned to come in, too - we just need to find a smart way to do it without breaking content dedup)
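For readers following along, the node shape being discussed looks roughly like this; the field names approximate the PBNode/PBLink protobufs, not the real go-ipfs API.

```go
package dag

// Link and Node echo the PBLink/PBNode protobufs discussed above.
// Field names are approximate, not the real go-ipfs types.
type Link struct {
	Name string // entry name within the directory
	Size uint64 // cumulative size of the linked subtree
	Hash []byte // multihash of the child node
}

type Node struct {
	Links []Link // a unixfs directory is essentially just these
	Data  []byte // unixfs metadata (e.g. "this node is a directory")
}
```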
mildred has quit [Ping timeout: 265 seconds]
<bergie> So with that in our case the process could be something like:
<bergie> Receive a list of web pages to update and the current hash for the website. ipfs add the web pages as "blobs". Then patch the website hash to add links to the new blobs. Store website hash
dignifiedquire has joined #ipfs
<cryptix> yup that should work
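A sketch of that flow, shelling out to the two CLI commands mentioned above (`ipfs add -q` and `ipfs object patch ... add-link`); the root hash and file paths are placeholders.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes one ipfs CLI command and returns its trimmed stdout.
func run(args ...string) (string, error) {
	out, err := exec.Command("ipfs", args...).Output()
	return strings.TrimSpace(string(out)), err
}

// UpdateSite adds each changed page as a blob and patches the site's
// root object to link to it, returning the new root hash. Hypothetical
// wrapper around `ipfs add -q` and `ipfs object patch ... add-link`.
func UpdateSite(root string, pages map[string]string) (string, error) {
	for name, file := range pages {
		hash, err := run("add", "-q", file)
		if err != nil {
			return "", err
		}
		root, err = run("object", "patch", root, "add-link", name, hash)
		if err != nil {
			return "", err
		}
	}
	return root, nil
}

func main() {
	// Placeholder root hash and paths, for illustration only.
	newRoot, err := UpdateSite("QmSiteRoot...", map[string]string{
		"index.html": "./out/index.html",
	})
	fmt.Println(newRoot, err)
}
```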
TheWhispery has joined #ipfs
<M-davidar> bergie: I'm interested in doing something similar with ipfs/archives, so let me know how you go with that :)
e-lima has quit [Ping timeout: 244 seconds]
* Stskeeps waves to bergie
<bergie> M-davidar: will do. Some experimentation to be done this week :-)
<bergie> hey Stskeeps !
<Stskeeps> :)
TheWhisper has quit [Ping timeout: 246 seconds]
besenwesen has quit [Quit: ☠]
JasonWoof has quit [Ping timeout: 244 seconds]
besenwesen has joined #ipfs
besenwesen has quit [Changing host]
besenwesen has joined #ipfs
<bergie> cryptix: it looks like there are at least some object API tests in https://github.com/ipfs/js-ipfs-api/blob/master/test/tests.js#L374
<bergie> no patch there though
Zuardi has quit [Remote host closed the connection]
Zuardi has joined #ipfs
e-lima has joined #ipfs
<M-davidar> ping VictorBjelkholm
<victorbjelkholm> pong!
<victorbjelkholm> bergie, what are you trying to do? Patch as in update an object?
<M-davidar> VictorBjelkholm (IRC): yeah, he wants to add files to a directory already in ipfs
<victorbjelkholm> you wouldn't do that, you would add the file and re-add the directory, no?
<M-davidar> you could, but it's not as efficient (esp. for larger directories)
<M-davidar> bergie: i assume there's a reason you're not just readding the whole directory?
<cryptix> bergie: im sure a PR would be welcome :)
cemerick has quit [Ping timeout: 240 seconds]
<victorbjelkholm> M-davidar, well, the contents of the directory have changed so the directory would have to be re-added. Am I misunderstanding how IPFS works? The objects are immutable
<victorbjelkholm> If you want mutability, you would use IPNS
<bergie> victorbjelkholm: we don't have directories on FS level in this case, just some data in a database
<bergie> victorbjelkholm: with ipfs object patch you take an existing object, change it in some way, and get a new hash out
<bergie> (if I understood it correctly)
<cryptix> victorbjelkholm: object/patch gives you a new root hash after an operation
<victorbjelkholm> bergie, you could just add the data JSON formatted with "ipfs add" then parse the JSON when getting the hash again
<victorbjelkholm> ah, I see
<victorbjelkholm> object patch is not supported in js-ipfs-api yet though https://github.com/ipfs/js-ipfs-api/issues/101
<victorbjelkholm> haha, that's the issue bergie just created :p Sorry
<bergie> in our case typically a website has hundreds of pages. When user changes something, we only touch the files affected (say, front page and an article page). So we'd like to take the existing hash, and ipfs object patch those in
<victorbjelkholm> that makes sense
<bergie> all Grid websites are also represented by a repo on GitHub, so we already do something similar there (add each new/updated page as blob, update tree reference, create a new commit)
<victorbjelkholm> I see, would make sense to use patch then, instead of re-adding everything
<cryptix> it also frees you from aggregating all the data just to add/remove a link of some structure
<victorbjelkholm> bergie, can you try using the add method and just replacing the hash, though? Not sure if it is performant enough for you
<victorbjelkholm> until we can get patch into js-ipfs-api that is
<M-davidar> cryptix: are there plans for directly translating git repos into ipfs directories (rather than just storing the raw git objects in ipfs)?
<bergie> victorbjelkholm: that, or maybe `ipfs object add` could do for now. Though it'll become a bit heavy to do down the line
<cryptix> M-davidar: yea we thought about that
<victorbjelkholm> bergie, I think you would prefer "ipfs add" (which you can still pass whatever data you want to, it doesn't have to be an actual file) rather than "ipfs object add". As I understand it, ipfs object add is more low-level, for people who want to construct their own objects
<cryptix> M-davidar: my time is thin but thats actually the plan for git-remote-ipfs - to act as a translator for git but storing native ipfs chains and trees, not git objects
<M-davidar> cryptix: ah, that sounds awesome
<bergie> victorbjelkholm: I already have the hash for each page object at this point, so object add might be easier since I only need to make links
<bergie> cryptix: hmm... which reminds. We have all pages also on S3. Might be interesting to have a way to "proxy" those to IPFS somehow
<cryptix> M-davidar: yup, think so too - will also make it easy to build viewers and all that other awesome stuff
<cryptix> bergie: there is a plan to support S3 as a blockstore after/in 0.4 iirc
<cryptix> but you would need to "readd" your content.. :-/
<cryptix> there is also a plan for a 'thin block store', because people dont want to 2x their disk usage when they pump in their media libraries
<cryptix> so some cross section of those two might give you that but not yet
<M-davidar> yeah, i asked jbenet about importing existing s3 buckets into ipfs a while ago... but i think the answer was basically to re-add everything :p
<M-davidar> iirc the thin blockstore idea isn't very high priority
<bergie> yeah, for now we can deal with having them in both... that is essentially how our current prototype works: the "design AI" produces pages, uploads to S3, then tells our serving network about the updated pages.... which then downloads from S3, adds to IPFS, and starts serving
<bergie> this all works OK, and gives us triple redundancy (all pages are on S3, git, and IPFS). Our web servers try to serve from IPFS, but failing that fall back to S3 *and* add to IPFS
<bergie> end users can use either the git repo or the IPFS hash to download/backup their sites
cemerick has joined #ipfs
<M-davidar> bergie: cool, i wish more sites did that :)
<cryptix> yup thats very cool
<bergie> M-davidar: the primary use case for us was a federated cache that our web servers can share. But making the sites available also via IPFS is certainly a nice bonus :-)
<M-davidar> bergie: and eventually it will be able to offload bandwidth from your webservers ;)
<bergie> yep. I'm certainly looking at ipns and all that with interest
<M-davidar> i'm also interested in making a webhook to automatically mirror github pages sites to ipfs at some point
mildred has joined #ipfs
<bergie> M-davidar: that should be quite doable with GH API... receive a build hook, get tree from the build's commit, fetch the files, ipfs add
<M-davidar> bergie: yeah, definitely, it's just a matter of getting the service that does it set up
<bergie> M-davidar: I think the main downside is that you need to be repo admin to be able to set up a hook, so you can't do that for 3rd party GH Pages sites
<M-davidar> bergie: yeah, it would be an opt-in thing
hellertime has joined #ipfs
<M-davidar> then we start a campaign to get everyone to enable it :p
mildred has quit [Ping timeout: 240 seconds]
<M-davidar> once we get the dmca handling process set up, i'd also like to start crawling third party websites and dumping them into ipfs
<bergie> yep. Would probably be an afternoon's work to create a Node.js app that allows people to log in with GH and set up mirroring for their repos, and then shows hashes for the enabled repos. Maybe something I could hack some evening at c-base :-)
<M-davidar> bergie: that would be awesome :D
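Such a mirroring hook could be sketched in a few lines (here in Go rather than Node.js, to match the other examples in this log); the webhook payload field and branch name are assumptions, and error handling is omitted.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os/exec"
)

// pushEvent picks the one field we need from GitHub's push webhook payload.
type pushEvent struct {
	Repository struct {
		CloneURL string `json:"clone_url"`
		Name     string `json:"name"`
	} `json:"repository"`
}

func hook(w http.ResponseWriter, r *http.Request) {
	var ev pushEvent
	if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// Clone the built branch and add it recursively; errors and cleanup
	// are omitted for brevity.
	dir := "/tmp/" + ev.Repository.Name
	_ = exec.Command("git", "clone", "--branch", "gh-pages", "--depth", "1",
		ev.Repository.CloneURL, dir).Run()
	out, _ := exec.Command("ipfs", "add", "-r", "-q", dir).Output()
	w.Write(out) // the last line is the root hash of the mirrored site
}

func main() {
	http.HandleFunc("/hook", hook)
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```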
cemerick has quit [Ping timeout: 240 seconds]
<bergie> then set up https://devcenter.heroku.com/articles/heroku-button for super-easy installation
<M-davidar> i suspect we could probably host a public service on one of our servers
<M-davidar> I'd also really like it to set up appropriate DNS links so that you can access the site at <github-username>.gh.ipfs.io
<mue_> bergie, do you regularly hang out there?
<M-davidar> I'd have to bug lgierth about how to actually do that though
<bergie> mue_: now that we're using IPFS, sure :-)
<mue_> bergie: oh, we are? (;
<bergie> M-davidar: yeah, the same Node.js app could be running anywhere. Heroku would just give a convenient option for people who don't want to deal with setup
<bergie> mue_: (The Grid is)
<mue_> bergie, i see
danielrf has quit [Read error: Connection reset by peer]
<M-davidar> bergie: oh, you mean installing a full ipfs daemon on heroku as well?
<bergie> mue_: as for http://c-base.org/ ... I would think an interplanetary filesystem would make sense for a space station ;-)
danielrf has joined #ipfs
<bergie> M-davidar: good point, that wouldn't work since we can't expose non-HTTP ports there :-(
<M-davidar> :(
<bergie> unless there is a way to "HTTP-proxy" IPFS
<M-davidar> actually, i think i remember a secret plan about this kind of thing being mentioned recently
<M-davidar> not the github part, but the other bit
gaboose has joined #ipfs
<M-davidar> not sure what the timeline on that is though
<zignig> M-davidar: o/
<M-davidar> zignig: \o
<zignig> did your day consist of 4 minutes of horse racing ?
<M-davidar> zignig: lol, someone just told me it was melb cup today :p
noffle has quit [Ping timeout: 240 seconds]
<M-davidar> so, not really
gorhgorh has quit [Quit: No Ping reply in 180 seconds.]
<zignig> ;)
<zignig> we got free pizza and beer. YAY
noffle has joined #ipfs
<M-davidar> zignig: ipfs me some of that pizza
silotis has quit [Quit: No Ping reply in 180 seconds.]
gorhDroid has joined #ipfs
silotis has joined #ipfs
<zignig> !pin QmdqapbMFFEuxa726fMeh9K4FdJpcjzdTQaTYKjWowyEuJ
<pinbot> now pinning /ipfs/QmdqapbMFFEuxa726fMeh9K4FdJpcjzdTQaTYKjWowyEuJ
<zignig> now every one can have a slice.
* M-davidar mmm, tastes like liquid crystal
<zignig> M-davidar: have you done anything with the arXiv repo ?
<M-davidar> zignig: if by anything you mean not much, yes :p
* zignig is thinking a word2vec that includes all the LaTeX references.
<M-davidar> zignig: feel free to go ahead and do it ;)
<zignig> M-davidar: lack of time... or too many projects can't tell ;P
<M-davidar> and/or complain about what we could do to make such things easier to do ;)
<M-davidar> yeah, I know how you feel :(
<zignig> otherwise we might have to stand up and stop typing.
<zignig> BLASPHEMY!
<M-davidar> why stand up when my chair has wheels?
<bergie> M-davidar: subscribed, thanks!
mildred has joined #ipfs
JasonWoof has joined #ipfs
zugzwanged has quit [Ping timeout: 260 seconds]
zugz has joined #ipfs
sstangl_ has joined #ipfs
sstangl has quit [Ping timeout: 240 seconds]
lachenmayer_ has quit [Ping timeout: 240 seconds]
alpounet has quit [Ping timeout: 240 seconds]
lachenmayer has joined #ipfs
dignifiedquire has quit [Quit: dignifiedquire]
alpounet has joined #ipfs
rawtaz_ is now known as rawtaz
arpu has joined #ipfs
dignifiedquire has joined #ipfs
border0464 has joined #ipfs
pfraze has joined #ipfs
e-lima has quit [Ping timeout: 250 seconds]
chriscool has joined #ipfs
ilyaigpetrov has joined #ipfs
ilyaigpetrov has left #ipfs [#ipfs]
e-lima has joined #ipfs
vijayee_ has joined #ipfs
mappum has quit [Ping timeout: 240 seconds]
mappum has joined #ipfs
bret has quit [Ping timeout: 240 seconds]
bret has joined #ipfs
pfraze has quit [Remote host closed the connection]
<nicolagreco> how many bytes is an ipfs hash?
M-matthew has quit [Quit: node-irc says goodbye]
M-matthew has joined #ipfs
pfraze has joined #ipfs
<mmuller> nicolagreco: in general, it is variable. see https://github.com/jbenet/multihash
<mmuller> though the ones on my node kinda look like sha256, so 256 bits = 32 bytes
<mmuller> though the base-58 encoding gives a 46 byte ascii representation.
<achin> don't forget the header -- 1 byte to identify the hash function, 1 byte to identify the hash length, and then a variable number of bytes for the hash
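That header layout is easy to demonstrate. Here is a self-contained Go sketch that builds a sha2-256 multihash by hand (0x12 function code, 0x20 length, then the digest) and base58-encodes it, yielding the familiar 46-character Qm... string.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/big"
)

const b58chars = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

// base58 encodes b Bitcoin-style. Leading-zero handling is skipped,
// which is fine here because a multihash starts with a nonzero byte.
func base58(b []byte) string {
	n := new(big.Int).SetBytes(b)
	mod := new(big.Int)
	var out []byte
	for n.Sign() > 0 {
		n.DivMod(n, big.NewInt(58), mod)
		out = append([]byte{b58chars[mod.Int64()]}, out...)
	}
	return string(out)
}

func main() {
	digest := sha256.Sum256([]byte("hello"))
	// Header: 0x12 = sha2-256 function code, 0x20 = 32-byte digest length.
	mh := append([]byte{0x12, 0x20}, digest[:]...)
	fmt.Println(len(mh), "bytes ->", base58(mh)) // 34 bytes -> "Qm..." (46 chars)
}
```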
<rendar> is it important for ipfs that the hash function doesn't have collisions? because, for example, git doesn't care in a practical way if sha1 collisions are found
gaboose has quit [Quit: Connection closed for inactivity]
<achin> git and ipfs are both pretty much the same in this regard -- they assume in practice no collisions will happen
<rendar> achin: yep
<mmuller> I expect that the more secure hash is probably more important than in git, because it is effectively a global address.
<rendar> achin: but while git doesn't really care if sha1 collisions happen or not, does ipfs? does some security property of ipfs rely on the hash being a crypto hash?
<mmuller> If you could engineer a hash value, you could substitute one object for another in the network.
<achin> like i said, i think they are the same. both git and ipfs will have problems if a collision is actually found -- data will be lost
<mmuller> right, but with git you only have to worry about it in the scope of a repository.
<achin> and in ipfs, you'd only have to worry about it for nodes that include (indirectly) the hash that was a collision
<mmuller> right, but by spoofing the hash, you'd break all of those nodes and everything that ever referenced them (it's the "permanent web")
<M-hrjet> achin: which could be anywhere in the network?
munkey has joined #ipfs
<mmuller> with Git, you'd need repository access and you'd end up just breaking history of that repo.
<achin> mmuller: that's right. but there are a lot of trees in IPFS
<M-hrjet> I also had the global hash collision concern, and I thought it is reduced by having two levels of hash, the creator hash and the file hash.
<achin> so it seems like the damage will be entirely contained to trees that include the collision. all other trees will be unaffected
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<rendar> sorry i didn't explain it better: i meant that there is no way an *actual* file will have the same hash in both git and ipfs, because sha1 and sha256 hashes are way too big for 2 *meaningful for humans* objects to share a value. given that, i was wondering if attackers who know how to get collisions out of sha1 or sha256 can attack git or ipfs: for git i don't see the problem,
<rendar> because if the collision represents non-human data, it's ok. what about ipfs? does ipfs rely on the security of crypto hashes more than git does?
<mmuller> achin: yes, the damage is contained to the objects that reference the compromised object. I guess the concern would be if it's a "really important" object.
<mmuller> rendar: I think it relies a little more because there's no additional barrier of "write access to the repository."
<M-hrjet> "because sha1 and sha256 hashes are way too big to contain the same value". I don't buy this argument. Probability can't guarantee anything. A collision could happen for the next twenty files that I add or it may not happen for another million centuries.
wscott has left #ipfs ["Leaving"]
<M-hrjet> It's really quite bizarre that this concern is pushed under the rug, both in git and ipfs.
<achin> i'm sorry, i still don't see a big difference between git and ipfs with regard to collisions
<rendar> mmuller: hmm, write access? what you mean?
<M-hrjet> From a security point of view, it is fine to have that argument of probability (because it could be argued that it is hard to deliberately find a collision in a crypto hash). But my concern is more about robustness.
<rendar> M-hrjet: it may happen, but do you realize it's more probable that a pack of wolves attacks you in the next 2-3 hours, and that *another* pack of wolves, totally independent from the first one, attacks some of your friends?
<mmuller> rendar: even if I engineer a collision on a revision in your repo, I still need to be able to get it into your repo.
<rendar> mmuller: oh, right
<mmuller> whereas with IPFS, if I were to engineer a collision, I could simply publish the object.
<achin> mmuller: that's a great point, yes
<rendar> mmuller: exactly
<M-hrjet> rendar: the probability doesn't matter from a robustness point of view (it matters only from a security point of view).
<achin> that's a good and important difference, you're right
<rendar> mmuller: basically every git repository is separate from the others, but with ipfs it's like you have everything in one huge, universal repository
<mmuller> right
<rendar> M-hrjet: yep, i agree
<rendar> basically i formulated the question because of the multihash: why is multihash actually employed in ipfs, if sha256 is big enough to provide a hash function that will work forever?
<achin> maybe in 3 or 30 years from now, a researcher finds a way to make collisions more likely.
<rendar> achin: that's the point..
<achin> then we upgrade the hash function
<rendar> achin: even if you can make collisions with sha256, if they are not human-meaningful files, who cares?
<rendar> or, ipfs cares about that more than git?
<achin> what are "human meaningful files"?
<achin> aren't all files in ipfs meaningful to humans?
<rendar> achin: some data that can be understood by the humans/programs that generated it
<rendar> achin: if i have "hello" and "diojfoiwjpirynnmnrmr" which have the same sha256 hash
<rendar> who cares about that?
<rendar> and how that can damage ipfs?
<achin> you can
<achin> uhh
<achin> you care*
<rendar> how?
<achin> (assuming you added the "hello" file)
<achin> because if you try to get out the "hello" file you might actually get back the "diojfoiwjpirynnmnrmr" file instead
<rendar> of course, but you won't ever have "diojfoiwjpirynnmnrmr" because it doesn't mean anything
<achin> what do you mean?
<ansuz> "diojfoiwjpirynnmnrmr" was my grandmother's name
<ansuz> you jerk
<rendar> achin: what i mean is that there are 2 kinds of files: files that make sense to the humans/software that generated them, and files that don't (e.g. random data)
<rendar> achin: of course, 99% are random data, and 1% are files that humans understand
<rendar> so if there are collisions between the first group and the second (which is the *only* likely event, from a probability standpoint)...who cares?
<rendar> i mean, who cares if some random data has the same hash of "hello" ?
<achin> do you see my point about how you might get the random data, instead of the data you want?
<rendar> achin: of course yes
<rendar> achin: that's the basic problem of collisions in both git and ipfs
<mmuller> there's the case achin describes involving blocking transmission of an object by transmitting a garbage object with the same hash
<ansuz> ipfs *should* fetch the closest file with a matching hash, right?
<rendar> mmuller: hmm well
<mmuller> there's also the case where someone has engineered meaningful data with the same hash
<rendar> mmuller: achin means that if i can find sha256 collisions i can send that random blob to mess up things on purpose?
<mmuller> the latter case is very very hard to do, but not out of the realm of possibility
<achin> does ipfs have a notion of "closest file"? it probably has a notion of "closest peer"
<mmuller> achin: I don't think so, I think it always requires a hash match.
<rendar> for that ipfs would require a trie structure
<mmuller> though for some definition of "closeness"... you could consider a hash value a region of proximity :-)
<achin> rendar: yes, that's what i mean
<rendar> too huge to contain all hashes of all files
martinkl_ has joined #ipfs
<rendar> achin: oh, ok, i see your point now
<rendar> achin: yes, that's a security concern then
martinkl_ has quit [Max SendQ exceeded]
<rendar> achin: because that would violate the ipfs stability itself
<achin> if fileA and fileB have the same hash, anyone who asks for fileA (by its hash) might instead receive fileB. and they might not know anything went wrong!
<rendar> achin: yep, because ipfs is open, everyone can add stuff, even evil guys with their sha256-collided random blobs
<achin> that's right
domanic has quit [Ping timeout: 246 seconds]
martinkl_ has joined #ipfs
<M-hrjet> Is anyone else worried about non-intentional collisions too?
<achin> i'm not worried about either
<mmuller> M-hrjet: I've learned not to :-)
<M-hrjet> I am not able to get over it.
<M-hrjet> IPFS is supposed to be a *filesystem*.
<M-hrjet> Imagine if your local filesystem stored files this way.
<achin> if you found a sha256 collision, you would be very very famous. added to the history books
<mmuller> I think most of the major cloud storages de-dupe this way.
<mmuller> 256 bits is an extremely big number.
<ansuz> I think it would be enough to document why you're more likely to get the meaningful data
<M-hrjet> mmuller: In a global space, without verification? I doubt that.
<M-hrjet> Or if they did, I wouldn't go near them.
<ansuz> ie. it's more likely to stay pinned
<ansuz> and especially more likely to be pinned by a nearby peer, by that logic
<ansuz> but most people don't know about DHTs, so they're more likely to think of it as a safety deposit box
<achin> if an attacker has enough resources to find a collision, it seems reasonable to assume they have enough resources to put up their own ipfs nodes in every datacenter in the world, and use them to publish the collision
munkey has quit [Ping timeout: 246 seconds]
<achin> (interesting fact: humans are *terrible* at producing random data)
<clever> achin: another factor for IPFS is how the files get chunked up
<clever> even if you have a 4mb and a 5mb file that have identical sha256 hashes, IPFS will break them up into smaller chunks, which dont collide
<clever> and then hash lists of hashes, to create the id you get back
<achin> which *might not* collide
<clever> yeah, you would need a collision on the hash(list of hashes) to replace an entire file
<achin> heck, it's even possible that if you take a 1GB file, chunk it into a bunch of 50MB files, that two of those 50MB chunks might have a hash collision
<clever> a collision on one chunk would only replace a portion of the file
munkey has joined #ipfs
<clever> yep
<clever> but getting a collision on a list of hashes is going to be even more rare
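A toy illustration of that "hash of a list of hashes" point: the file's root hash commits to every chunk hash, so a collision in one chunk only swaps that leaf, while forging the whole file would need a second collision at this level too. (Sketch only; real merkledag nodes are protobufs, not bare concatenations.)

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// root hashes the concatenation of all chunk hashes. The file ID commits
// to every chunk, so swapping one chunk only corrupts that leaf; swapping
// the whole file would need a second collision at this level.
func root(chunkHashes [][]byte) [32]byte {
	var concat []byte
	for _, h := range chunkHashes {
		concat = append(concat, h...)
	}
	return sha256.Sum256(concat)
}

func main() {
	a := sha256.Sum256([]byte("chunk one"))
	b := sha256.Sum256([]byte("chunk two"))
	fmt.Printf("root: %x\n", root([][]byte{a[:], b[:]}))
}
```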
<victorbjelkholm> "A mass-murderer space rock happens about once every 30 million years on average. This leads to a probability of such an event occurring in the next second to about 10-15. That's 45 orders of magnitude more probable than the SHA-256 collision. Briefly stated, if you find SHA-256 collisions scary then your priorities are wrong"
<victorbjelkholm> best quote I have regarding sha256 collisions
<mmuller> victorbjelkholm: lol. yep :-)
<achin> pfft, i eat mass-murdering space rocks for breakfast
<mmuller> it's a great concept for a breakfast cereal.
<ansuz> "mass-murdering space rocks are a part of a complete balanced breakfast"
<clever> i totally forgot about that fly-by on halloween
<clever> meant to watch the live telescope stream of it
<clever> bbl
<M-hrjet> That's a nice sounding quote.. but I don't buy the last bit about priorities.
<pjz> M-hrjet: if you're worried, then don't use IPFS. Simple.
<M-hrjet> I would like my filesystem to always return x, when I ask for x.
<clever> switching to a 512bit hash is simpler than stopping a mass-murder space rock
<pjz> M-hrjet: then give up on computers: components fail
<clever> even if your priorities about chance are a bit off, one is still easier to fix
<M-hrjet> Yeah, but I like to talk about it first, in the hope that designs and approaches are flexible.
<clever> bbl
<pjz> M-hrjet: it's upgradable in that IPFS is flexible enough to support multiple hash functions
<achin> you could configure your own ipfs node to use sha512 (well, you can't do this yet, but eventually you might be able to)
<M-hrjet> If the consensus is to not worry about it.. I will drop off right away.
<pjz> M-hrjet: so if tomorrow sha-256 is found to be fatally flawed, IFPS can be upgraded to somethign without that flaw
<M-hrjet> My worry is not that it is 256 bit.
<pjz> M-hrjet: then what's your worry?
<M-hrjet> My worry is that it is based on probability at all!
<pjz> M-hrjet: well then you should just drop off right away
<achin> are you also worried about your own data on a local filesystem?
<pjz> M-hrjet: or you should go read more about cryptographic hashes
<achin> (and if you aren't, why aren't you?)
<M-hrjet> my local filesystem, is not designed to rely on probabilities.
<M-hrjet> the underlying storage might be, but that's a different layer.
<pjz> M-hrjet: are you sure? what OS/filesystem are you running?
<achin> there is a non-zero probability that a cosmic ray will flip a bit on your harddisk, and this won't be detectable by your filesystem or OS
<M-hrjet> yeah, but that's a storage layer problem.
<pjz> M-hrjet: a mass-murderer space rock is 10^45 times more likely than a sha256 hash collision
<achin> if you are worried that ipfs doesn't do anything to protect against this rare event, then surely you should be worried that your filesystem doesn't do anything to protect against this rare event?
legobanana has joined #ipfs
<M-hrjet> Those are different layers. If a space rock hits earth, it affects IPFS as well. So it's not a concern about the filesystem, but about the storage layer.
<M-hrjet> Whereas, a hash collision in IPFS affects only IPFS.
<achin> do you want IPFS to protect against this somehow?
<pjz> M-hrjet: so IPFS is more likely to be disrupted by a space rock than a hash collision and you're worried about the _hash collision_ ?
<clever> achin: cosmic rays will impact your ram much more than they could impact your hdd
<achin> let's ignore the space rock for a moment
<achin> and let's ignore cosmic rays too, for the moment
<M-hrjet> pjz: I will say this only once more: probability of collisions is irrelevant for robustness.. it is only relevant for security.
<clever> achin: physically hot ram will also have a higher chance of flipping if the refresh rate isnt high enough
<achin> M-hrjet: what if i told you that there was nothing that could be done to reduce the chance of data corruption to zero percent?
<clever> ive seen a whole security talk on servers/phones having a single bit flip in a domain name, before the dns query got sent
<clever> by registering domains with bit flips, he was able to steal an abnormally large amount of traffic
<achin> (clever: don't forget the really really crazy hacks about flipping ram bits to get write access to read-only pagefiles!)
<pjz> M-hrjet: so you're worried about malicious content?
<M-hrjet> This is going nowhere.. I am going to take a walk, cool off, and rethink.
<M-hrjet> Laterz.
<pjz> M-hrjet: note that malicious content is likely even more difficult to craft than a simple collision since it requires particular content _and_ a collision
<clever> achin: thats more of an active attack, but the dns bit flips can send you to foogle.com, complete with a valid foogle.com cert, when you click the google bookmark!!
water_resistant has joined #ipfs
<achin> yes, it's a very active attack. but it's not theoretical! it's real!
<achin> that someone was able to PoC this blows my mind
<clever> in one case he saw, the bit flip happened on the way to saving a file to disk
voxelot has joined #ipfs
<clever> so the error got cached, and 1000's of cellphone users were sent to his test server
<achin> btw, i've not seen the thing you were talking about. do you have a link to a paper?
<achin> M-hrjet: sorry if we were getting too wound up about this!
<clever> in another case, it was google downloading xml pages to describe active widgets on a page; it allowed JS injection against 100's of users
<clever> achin: let me see
<clever> bbl
<achin> thanks
amade has joined #ipfs
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<ipfsbot> [go-ipfs] diasdavid opened pull request #1938: update version (dev0.4.0...update-version-to-0.4.0) http://git.io/vlD15
pfraze has quit [Remote host closed the connection]
pfraze has joined #ipfs
mildred has quit [Ping timeout: 255 seconds]
<daviddias> just made a very (veeeery) simple module to add the commit signature to the github repo you are working on https://github.com/diasdavid/sign-commit , this might have little use, but when collaborating in open source projects with contributing guidelines, it saves you from finding the bash script to do that again :) hope this is useful for someone else :)
HoboPrimate has joined #ipfs
HoboPrimate has quit [Client Quit]
tilgovi has joined #ipfs
munkey has quit [Ping timeout: 265 seconds]
<victorbjelkholm> oh, nicely done daviddias, looks super simple
<daviddias> ahah I basically grabbed the shell script that was already available
<victorbjelkholm> could be installed automatically when a user does "npm install" from a project for example, right?
<daviddias> oh, right! good idea
<daviddias> but then again
<daviddias> this has to be voluntary signing
<daviddias> we don't want people to sign their commits if they don't want to
<achin> daviddias: :+1: on #1938
dignifiedquire has quit [Ping timeout: 255 seconds]
HoboPrimate has joined #ipfs
<cryptix> hellow again
sonatagreen has joined #ipfs
<victorbjelkholm> daviddias, isn't it a requirement to sign commits? Saw a PR that was denied and the guy left in a rage because of the signoff requirement
<victorbjelkholm> if I remember correctly
<daviddias> it is required for us to accept
<daviddias> but it should be voluntary for a person to sign them
<daviddias> if I sneak the signature in post install script on each npm module
<daviddias> then I'm making a decision for you
M-hrjet has left #ipfs ["User left"]
<victorbjelkholm> Ah, I see what you mean now
dignifiedquire has joined #ipfs
<Keiya> I wonder if anyone has extracted the pixly fonts for Sans and Papyrus and shoved them in otfs yet.
<Keiya> Those were very nicely done
<Keiya> wait
<Keiya> what channel did I close, all my channels are off by one... sorry
Keiya has left #ipfs [#ipfs]
<achin> ok all, we now have to extract the pixly fonts for Sans and Papyrus and shove them in ipfs
<rschulman> hrjet does NOT like very small probabilities.
TheWhispery is now known as TheWhisper
<clever> achin: back
<clever> achin: seen the insanity that a single bit flip can cause?
ygrek has joined #ipfs
vijayee_ has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
martinkl_ has joined #ipfs
forth_qc has joined #ipfs
dignifiedquire has quit [Quit: dignifiedquire]
M-matthew has quit [Quit: node-irc says goodbye]
M-matthew has joined #ipfs
martinBrown has quit [Quit: -]
cemerick has joined #ipfs
<noffle> good morning
<cryptix> hey noffle :)
<noffle> cryptix: how goes?
<nicolagreco> victorbjelkholm, the quote was so great
<whyrusleeping> goooood morning ipfstronauts
M-victorm has joined #ipfs
sseagull has joined #ipfs
<ipfsbot> [go-ipfs] whyrusleeping pushed 1 new commit to refactor/transport: http://git.io/vlyii
<ipfsbot> go-ipfs/refactor/transport 2bb574c Jeromy: add timeout opt to transport dialer creation...
<noffle> o/
<cryptix> noffle: flu-jailed to the sofa but could be worse :)
<cryptix> ohi whyrusleeping :)
ygrek has quit [Ping timeout: 264 seconds]
martinBrown has joined #ipfs
fingertoe has joined #ipfs
s_kunk has quit [Ping timeout: 246 seconds]
<ipfsbot> [go-ipfs] whyrusleeping force-pushed fix/log-hang from 3727f48 to ce81fb4: http://git.io/vlJ3U
<ipfsbot> go-ipfs/fix/log-hang ce81fb4 Jeromy: fix log hanging issue, and implement close-notify for commands...
<achin> clever: i've added that youtube to my watch-list playlist (i'll watch when i'm home from work)
Matoro has quit [Ping timeout: 260 seconds]
cemerick has quit [Ping timeout: 260 seconds]
simonv3 has joined #ipfs
* whyrusleeping fucked that rebase up hard
<whyrusleeping> thank you ipfsbot for logging the old hash for me
<sonatagreen> !botsnack
<pinbot> om nom nom
<whyrusleeping> that wasnt for you pb
<sonatagreen> oops
<whyrusleeping> lol
<rschulman> haha
<cryptix> whyrusleeping: git reflog?
<achin> turns out it's pretty hard to get fucked up beyond all repair in git
<cryptix> yup :)
<cryptix> i had nightmarish moments too before i knew of the reflog, though
<achin> luckily i've never needed the reflog. i can normally detect the badness before my scrollback loses the key information
<whyrusleeping> cryptix: yeah, reflog works, but looking at irc for the old ref is easier :P
<cryptix> true but bad rebase is one of those things where its really nice
<whyrusleeping> especially since i had about four tries on the rebase, and wasnt sure exactly which ref was which
<cryptix> heh point taken
<whyrusleeping> git needs a 'reset to before i made bad decisions about rebasing this morning' command
<cryptix> achin: how is that irclog coming along? is the code that produces them somewhere online?
Matoro has joined #ipfs
<achin> cryptix: pretty well, issue #1925 blocked me for about 24 hours, but that's been fixed and i can continue work. at this point, it's mostly UI stuff
<ipfsbot> [go-ipfs] whyrusleeping force-pushed fix/log-hang from ce81fb4 to d14c0fc: http://git.io/vlJ3U
<ipfsbot> go-ipfs/fix/log-hang c35710b Jeromy: vendor logging lib update...
<ipfsbot> go-ipfs/fix/log-hang d14c0fc Jeromy: fix log hanging issue, and implement close-notify for commands...
<cryptix> whyrusleeping: excuse my transport ramblings if they are bonkers - my thoughts are 'enhanced' by a metric f**ton of snot and goo in my head
<achin> cryptix: i'm not sure how much farther i want to take this, though. i'm not a UI designer, and i'm rapidly approaching the limit of my abilities
<whyrusleeping> cryptix: no, what you posted is what i would like to be able to write
<whyrusleeping> but it only really works on configuring concrete types :/
<cryptix> have you seen my 2nd comment?
<whyrusleeping> oh, not yet
<cryptix> achin: high five :) thats my problem too most of the time
<achin> cryptix: i will for sure share the source code, probably by weeks end
<cryptix> awesome!
<cryptix> i dug a little into the client JS and tried to format the timestamps to a formatted date but got lost in a rabbithole somewhere
<achin> (whyrusleeping: to give a more detailed answer to the question you asked me on github -- the tool that produced the ref with the unsorted links is this irclog-to-ipfs tool i'm working on)
<ipfsbot> [go-ipfs] whyrusleeping force-pushed fix/log-hang from d14c0fc to 4cb826f: http://git.io/vlJ3U
<ipfsbot> go-ipfs/fix/log-hang 0500dcd Jeromy: vendor logging lib update...
<ipfsbot> go-ipfs/fix/log-hang 4cb826f Jeromy: fix log hanging issue, and implement close-notify for commands...
<whyrusleeping> cryptix: ah yeah, i was just thinking we could probably do something like your second comment
<whyrusleeping> although i'm not actually sure whats nicer at that point
* cryptix prefers smaller interfaces
<cryptix> but i must admit i dont see the full picture yet with libp2p
ygrek has joined #ipfs
<clever> achin: ah, kk
mvr_ has quit [Quit: Connection closed for inactivity]
Matoro has quit [Ping timeout: 260 seconds]
<whyrusleeping> cryptix: cross platform lib moving from ip:port to id:servicename
<kyledrake> I read in earlier chat that there were plans to auto-compress IPFS content?
<whyrusleeping> kyledrake: yeah, its been talked about
<kyledrake> Be careful with that. Whether that's better is very much a use-case question.
<kyledrake> Also you need to know when something is already compressed or it's a CPU hit
<kyledrake> (for no purpose)
<whyrusleeping> oh yeah, its a config option
Encrypt has joined #ipfs
<whyrusleeping> would be*
<kyledrake> Global or per-file?
<whyrusleeping> not sure yet
<kyledrake> err per-object
<whyrusleeping> but i do have to say that my filesystem uses lz4 compression by default globally
<kyledrake> cool. Just passing web hosting geezer advice along ;)
<whyrusleeping> and i barely notice any cpu hit
<achin> compression also has an impact on deduplication
<whyrusleeping> achin: its compression *after* dedupe
<achin> if you take a chunk of data and compress it with gzip level 4, the result might be different than if it was compressed with gzip level 5. it's just another way for the same data to have different hashes
<cryptix> modern compression like snappy should help with overhead over already compressed data
<achin> (but there *for sure* are use cases when you want to compress. i think it should be an option to ipfs-add)
<kyledrake> Yeah, that's the other thing. Compressing it changes the nature of the hash.
<kyledrake> It's a different file.
<cryptix> 19:37 <@whyrusleeping> achin: its compression *after* dedupe
<whyrusleeping> ^
<whyrusleeping> important
<cryptix> so the hashes are still for the uncompressed data
<cryptix> just stored compressed afaict
wtbrk has joined #ipfs
<achin> the hashes are for the uncompressed data?
<whyrusleeping> yep. the compression would be done by the datastore between calling 'Put(data)' and actually writing to the disk
<kyledrake> Would patching a very large object require a decompress to compute the new hash then?
<achin> ooh, hmm
<whyrusleeping> kyledrake: it would require decompressing the patched areas
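A sketch of that datastore-level compression: hashes are still computed over the uncompressed bytes, and only the on-disk form changes. The Datastore interface here is a simplified stand-in, not go-ipfs's real go-datastore API.

```go
package compressds

import (
	"bytes"
	"compress/gzip"
	"io"
)

// Datastore is a simplified stand-in for go-ipfs's key/value block store.
type Datastore interface {
	Put(key string, value []byte) error
	Get(key string) ([]byte, error)
}

// gzipDatastore compresses between Put(data) and the underlying store,
// so block hashes (computed elsewhere, pre-compression) are unaffected.
type gzipDatastore struct{ inner Datastore }

func (d gzipDatastore) Put(key string, value []byte) error {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(value); err != nil {
		return err
	}
	if err := zw.Close(); err != nil { // flush all compressed bytes
		return err
	}
	return d.inner.Put(key, buf.Bytes())
}

func (d gzipDatastore) Get(key string) ([]byte, error) {
	raw, err := d.inner.Get(key)
	if err != nil {
		return nil, err
	}
	zr, err := gzip.NewReader(bytes.NewReader(raw))
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return io.ReadAll(zr) // decompress back to the original block
}
```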
Matoro has joined #ipfs
<kyledrake> Pondering the implications of that on large sets of data vs decompression is a good idea. If it's basically the same then ignore me ;)
<achin> i was thinking about compressing just the data in the PBNode.Data field. are there any advantages to this? i can think of a few contrived ones, i think
<kyledrake> s/decompression/not compressing the data
<multivac> kyledrake meant to say: Pondering the implications of that on large sets of data vs not compressing the data is a good idea. If it's basically the same then ignore me ;)
<cryptix> looks like we got another entry for awesome-ipfs: https://diginomics.com/interview-with-kyle-drake-juan-benet-on-the-ipfs/
Matoro has quit [Ping timeout: 240 seconds]
<achin> neato
HoboPrimate has quit [Remote host closed the connection]
sstangl_ is now known as sstangl
<cryptix> kyledrake: objections about putting your "http is obsolete" post into the articles as well?
<kyledrake> cryptix go for it
<whyrusleeping> achin: compressing the data in the dag nodes is not recommended. that breaks deduplication.
<cryptix> done :]
HoboPrimate has joined #ipfs
pfraze has quit [Remote host closed the connection]
<achin> whyrusleeping: yeah, that was my concern
sstangl is now known as sspookl
<daviddias> whyrusleeping: seems that I'll be needing that 0.3.9 rebase on 0.4.0 sooner after all
<daviddias> not getting my ipfs-blob-store to pass the tests with the new js-ipfs-api against 0.4.0
Encrypt has quit [Quit: Quitte]
mvr_ has joined #ipfs
<whyrusleeping> daviddias: wanna help review my latest PR?
<daviddias> sure ! :)
anticore has joined #ipfs
<whyrusleeping> i can rebase as soon as that PR merges (even without utp)
<whyrusleeping> cryptix: if you're alive enough to PR, would love some on 1937
<whyrusleeping> ion: you too!
Matoro has joined #ipfs
<daviddias> whyrusleeping: what would be your definition of swarm?
<daviddias> I've heard feedback from Juan back in August that I had a different perception than what is implemented in go-ipfs, wanna know if it still applies
<cryptix> whyrusleeping: i'll make a 'wtf is this' pass as i still lack insight into libp2p's design
<whyrusleeping> daviddias: i really dont know. there are two different 'swarm' things
<whyrusleeping> daviddias: the entire thing is going to be refactored pretty good once it gets ripped out
<whyrusleeping> so really, what i'm looking for is 'this probably won't kill orphans' or something
<daviddias> got it
<daviddias> loool
<daviddias> was not looking with the right perspective then
<whyrusleeping> like, definitely comment on things that will need to be changed after its pulled out, but make sure to say what should be done now vs later
voxelot has quit [Ping timeout: 268 seconds]
<ipfsbot> [go-ipfs] whyrusleeping created feat/version-repo (+1 new commit): http://git.io/vl9c3
<ipfsbot> go-ipfs/feat/version-repo 7811983 Jeromy: add option to version command to print repo version...
<ipfsbot> [go-ipfs] whyrusleeping opened pull request #1940: add option to version command to print repo version (master...feat/version-repo) http://git.io/vl9c2
Tv` has joined #ipfs
Eudaimonstro has quit [Remote host closed the connection]
<ipfsbot> [js-ipfs-api] VictorBjelkholm opened pull request #102: Add current work in progress of readme.md and api.md (master...better-readme-and-api-docs) http://git.io/vl9Bt
<victorbjelkholm> richardlitt, sorry for taking so long, but here: https://github.com/ipfs/js-ipfs-api/pull/102
<ion> whyrusleeping: I’m afraid I’m not familiar at all with that part of the IPFS code. I looked at the diff and I don’t understand the initial or the final code well enough to make comments.
atrapado has joined #ipfs
<whyrusleeping> ion: ah, thanks anyways
fingertoe has quit [Ping timeout: 246 seconds]
simonv3 has quit [Quit: Connection closed for inactivity]
Eudaimonstro has joined #ipfs
sonatagreen has quit [Ping timeout: 265 seconds]
pfraze has joined #ipfs
forth_qc has quit [Quit: Lämnar]
rendar has quit [Ping timeout: 255 seconds]
<victorbjelkholm> What about changing the name OpenIPFS to PinCoop?
<victorbjelkholm> Ideas?
rendar has joined #ipfs
<achin> i think it's a good idea
<achin> "openIPFS" sounds like it's an implementation of IPFS
<victorbjelkholm> yeah, true, a point that has been raised many times now
<ion> Make the logo look like a chicken coop with pins inside.
<ion> achin: Yeah, and also like the other implementations are not open.
<victorbjelkholm> ion, a chicken coop?! What is that?
<victorbjelkholm> yeah, that's my biggest concern with the current name, implies ipfs is not already open
voxelot has joined #ipfs
<achin> hehe, i like the logo idea
<ion> whyrusleeping, jbenet: While adding sharness tests for the Cache-Control headers, I’m having ipns name publish fail with “Error: failed to find any peer in table”. Any ideas?
<multivac> [WIKIPEDIA] Chicken coop | "A chicken coop or hen house is a building where female chickens are kept. Inside hen houses are often nest boxes for egg-laying and perches on which the birds can sleep, although coops for meat birds seldom have either of these features. A chicken coop usually has an indoor area where the chickens sleep..."
<ion> Thanks multivac
<victorbjelkholm> haha, let's see if I can nail a logo like that. Usually my logos are text in different colors, not with actual things in them. But good suggestion nonetheless
<achin> where would we be without multivac
<whyrusleeping> ion: what does your test look like?
<victorbjelkholm> achin, in this case, wikipedia
<victorbjelkholm> otherwise, doomed
<ion> whyrusleeping: It’s this existing test which has been made test_expect_failure: https://github.com/ipfs/go-ipfs/blob/master/test/sharness/t0110-gateway.sh#L61
<ion> I’d like to have it succeed and check its resulting headers.
<whyrusleeping> huh, wonder why it's failing?
<achin> ion: do you have another daemon running at the same time? could that matter?
<whyrusleeping> probably because there aren't any other nodes in the network
<ion> achin: I don’t, I have stopped it.
pfraze has quit [Remote host closed the connection]
pfraze has joined #ipfs
Encrypt has joined #ipfs
<noffle> ipfs doesn't use different hashing for leaf nodes vs internal nodes, does it? are we vulnerable to this attack? http://crypto.stackexchange.com/questions/2106/what-is-the-purpose-of-using-different-hash-functions-for-the-leaves-and-interna
<noffle> we also don't (seem to?) store block size in the hash, which'd also prevent it
<victorbjelkholm> dammit, again it bit me that .cat returns an object sometimes and a stream sometimes...
chriscool has quit [Ping timeout: 250 seconds]
<noffle> basically an attacker can replace an internal node with a leaf node that hashes to the same thing
<noffle> since the merkle dag can't tell the difference unless it's encoded in the hash itself, or the subgraph size is known
<achin> in ipfs, it is impossible for a non-leaf node and a leaf node to have the same hash
<ion> noffle: I’m failing to understand how they would hash to the same thing if they are different. And if they are not different, how does it matter?
<ion> If you could just generate hash collisions, wouldn’t the hash function be unusable for IPFS in the first place?
hellertime has quit [Quit: Leaving.]
<ion> If get(hash) is guaranteed to always return the same unique data, how does it matter if the data happens to have (or even purely consist of) links elsewhere or not?
dd0 has quit [Quit: dd0]
anticore has quit [Ping timeout: 264 seconds]
<noffle> ion: you're raising really good questions :) I'm still reading / grokking it myself; give me a bit more time to answer
krl has quit [Ping timeout: 250 seconds]
nekomune has quit [Ping timeout: 250 seconds]
krl has joined #ipfs
ygrek has quit [Ping timeout: 250 seconds]
martinkl_ has quit [Quit: Textual IRC Client: www.textualapp.com]
nekomune has joined #ipfs
dysbulic has joined #ipfs
atrapado has quit [Quit: Leaving]
<noffle> ion: okay, my understanding is that if I had built a merkle dag where a subgraph is made of documents A and B, then the hash stored in their parent would be H(H(A) + H(B)). however, I could just make a single document whose data is H(A) + H(B). so if someone gave me this subgraph and I tried to validate it, it'd pass (same root hash), but I'd be missing the original documents A and B
<lgierth> victorbjelkholm: i think pincoop is a good name
<achin> noffle: it doesn't quite work like that in IPFS
<achin> the hash of a node is closer to H(links + data)
<achin> so you'd be able to distinguish between H(H(A) + H(B) + H(empty_data)) and H(H(no_links) + H(H(A) + H(B)))
<achin> (hopefully that mess of parentheses made sense)
Matoro has quit [Ping timeout: 240 seconds]
OutBackDingo has quit [Quit: Leaving]
dysbulic has quit [Ping timeout: 252 seconds]
<noffle> achin: ah ha! I see. looking at coding.go, we hash the protobuf of the node, which includes link data
<noffle> \o/
<achin> try looking at merkledag.proto
<achin> that might be clearer
<achin> but yes, that's exactly right -- we hash the protobuf encoded version of the PBNode
<noffle> got it. the protobuf structure implicitly encodes whether there are children or not
OutBackDingo has joined #ipfs
<noffle> that's great -- thanks!
<achin> no problem!
wtbrk has quit [Ping timeout: 250 seconds]
<ion> Even if the object pointing to A and B was literally H(A) + H(B) (and thus would be hashed as H(H(A) + H(B))), I don’t see the problem. If someone made a single document whose data is H(A) + H(B), they just replicated the object. So there’s one more node serving that specific object? Great.
<noffle> ion: no, it would be an object whose data is H(A) + H(B). so they could replicate a merkle dag that's actually data-different from the original
<noffle> (er, ignore first sentence; you said that assertion too)
<noffle> but you actually end up with two different dags. it's just that the algorithm for hashing subgraphs couldn't tell the difference between a single document with value V and N children that hash to V
<ion> Ah, that I get. It's about having to encode the difference between links and data somewhere. Sure, absolutely. But they were talking about collisions and preimage attacks, which led me to think there's more to it than that.
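[editor's note: to pin the whole exchange down in code, here is a toy sketch. The length-prefixed encoding below is hypothetical, standing in for the real protobuf PBNode encoding that go-ipfs actually hashes; it contrasts the naive H(H(A) + H(B)) scheme, where a crafted leaf collides with an internal node, with a scheme whose preimage records the link count, as achin describes.]

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // nodeHash hashes a toy encoding of (links, data). The one-byte link
    // count stands in for what the protobuf PBNode encoding provides: the
    // preimage records whether the bytes are child links or leaf data.
    func nodeHash(links [][]byte, data []byte) [32]byte {
        buf := []byte{byte(len(links))}
        for _, l := range links {
            buf = append(buf, l...)
        }
        return sha256.Sum256(append(buf, data...))
    }

    func main() {
        a := sha256.Sum256([]byte("document A"))
        b := sha256.Sum256([]byte("document B"))
        // A crafted leaf whose raw data is exactly H(A) || H(B).
        crafted := append(append([]byte{}, a[:]...), b[:]...)

        // Naive scheme: the internal node over A and B hashes to
        // H(H(A) || H(B)); the crafted leaf hashes to H(data), which is
        // built from the very same bytes, so the two collide.
        internal := sha256.Sum256(append(append([]byte{}, a[:]...), b[:]...))
        craftedLeaf := sha256.Sum256(crafted)
        fmt.Println("naive scheme collides:", internal == craftedLeaf) // true

        // Structure-encoded scheme: the link count enters the preimage,
        // so an internal node with two links and a leaf with no links
        // hash differently even over the same bytes.
        internalNode := nodeHash([][]byte{a[:], b[:]}, nil)
        leafNode := nodeHash(nil, crafted)
        fmt.Println("encoded scheme collides:", internalNode == leafNode) // false
    }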
rombou has joined #ipfs
water_resistant has quit [Quit: Leaving]
chriscool has joined #ipfs
<noffle> it's possible. the above was the best consistent understanding I was able to suss out of it
ygrek has joined #ipfs
border0464 has quit [Ping timeout: 240 seconds]
water_resistant has joined #ipfs
border0464 has joined #ipfs
mvr_ has quit [Quit: Connection closed for inactivity]
tilgovi has quit [Ping timeout: 240 seconds]
chriscool has quit [Quit: Leaving.]
chriscool has joined #ipfs
OutBackDingo_ has quit [Remote host closed the connection]
HoboPrimate has quit [Ping timeout: 260 seconds]
HoboPrimate has joined #ipfs
chriscool has quit [Quit: Leaving.]
rombou has quit [Read error: Connection reset by peer]
chriscool has joined #ipfs
rombou has joined #ipfs
pfraze has quit [Remote host closed the connection]
pfraze has joined #ipfs
pfraze has quit [Remote host closed the connection]
e-lima has quit [Ping timeout: 240 seconds]
amade has quit [Quit: leaving]
s_kunk has joined #ipfs
<whyrusleeping> this is a weird test failure: https://travis-ci.org/ipfs/go-ipfs/jobs/89064042
Encrypt has quit [Quit: Sleeping time!]
<ion> voxelot: Yeah. I’m predicting it will be closed-source. I’ll be happy if I’m proven wrong though.
<voxelot> right, so hush hush about the details
<achin> "MegaNet will actually rely on the unused processing power of people's smartphones and laptops.
<achin> "
<achin> MegaNet@home
<ion> Yes, all the unused battery capacity in people’s smartphones.
rombou has quit [Ping timeout: 240 seconds]