whyrusleeping changed the topic of #ipfs to: IPFS - InterPlanetary File System - https://github.com/ipfs/ipfs -- channel logged at https://botbot.me/freenode/ipfs/ -- code of conduct at https://github.com/ipfs/community/blob/master/code-of-conduct.md -- sprints + work org at https://github.com/ipfs/pm/ -- community info at https://github.com/ipfs/community/
domanic has joined #ipfs
Eudaimonstro has quit [Ping timeout: 245 seconds]
ralphtheninja has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<spikebike> not bad, all Bs
<lgierth> not good enough
<spikebike> it's not hard, just requires a few tweaks
<spikebike> looks like you just have to append the startcom intermediate cert to the original cert
<lgierth> yeah doing that right now
<lgierth> also enabling ocsp stapling
besenwesen has quit [Ping timeout: 250 seconds]
<whyrusleeping> lgierth: any interest in looking over some hellabot PR's?
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
voxelot has quit [Ping timeout: 245 seconds]
besenwesen has joined #ipfs
besenwesen has quit [Changing host]
besenwesen has joined #ipfs
<ipfsbot> [go-ipfs] whyrusleeping force-pushed feat/mfs from 1517de6 to ded09d5: http://git.io/vOWXQ
<ipfsbot> go-ipfs/feat/mfs db39682 Jeromy: WIP: implement mfs API...
<ipfsbot> go-ipfs/feat/mfs 0b22529 Jeromy: make ipns fuse mount use mfs...
<ipfsbot> go-ipfs/feat/mfs 5f3dcee Jeromy: move session option up to root...
Eudaimonstro has joined #ipfs
<rschulman> evening folks
akhavr has quit [Ping timeout: 260 seconds]
<whyrusleeping> rschulman: evenin!
Eudaimonstro has quit [Ping timeout: 240 seconds]
<rschulman> whyrusleeping: How are things?
ygrek_ has joined #ipfs
ygrek has quit [Ping timeout: 256 seconds]
ygrek_ has quit [Ping timeout: 246 seconds]
Leer10 has joined #ipfs
<whyrusleeping> rschulman: theyre alright
<whyrusleeping> tired
<whyrusleeping> gonna play some video games with my roommate tonight
chriscool has joined #ipfs
Leer10 has quit [Ping timeout: 244 seconds]
Ler10 has joined #ipfs
Ler10 has quit [Ping timeout: 250 seconds]
Leer10 has joined #ipfs
thefinn93 has quit [Ping timeout: 260 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
thefinn93 has joined #ipfs
cSmith has quit [Ping timeout: 252 seconds]
bigle has quit [Ping timeout: 264 seconds]
Igel has joined #ipfs
cSmith has joined #ipfs
bedeho has quit [Remote host closed the connection]
substack has quit [Ping timeout: 256 seconds]
bedeho has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<lgierth> whyrusleeping: hey, can take a look tomorrow, i'm about to head to bed. how can i help?
substack has joined #ipfs
<whyrusleeping> lgierth: just wanted a little more input on the PR that's open
Leer10 has quit [Ping timeout: 265 seconds]
notduncansmith has joined #ipfs
rjeli has quit [Ping timeout: 244 seconds]
notduncansmith has quit [Read error: Connection reset by peer]
Leer10 has joined #ipfs
<jbenet> !pin /ipfs/QmVYijdfMK3sBH4ncY4rDsZ2KtGyAF2vi46cxbYvBwC8od
<pinbot> now pinning /ipfs/QmVYijdfMK3sBH4ncY4rDsZ2KtGyAF2vi46cxbYvBwC8od
rjeli has joined #ipfs
<jbenet> thanks for taking care of SSL \o/
therealplato has quit [Quit: Leaving.]
<jbenet> !pin /ipfs/QmX5SScJsbuGT8TfGkP3UNPpAaJZ2UXqRunvPsemN8oQ3V/cap.png
<jbenet> !pin /ipfs/QmX5SScJsbuGT8TfGkP3UNPpAaJZ2UXqRunvPsemN8oQ3V
<pinbot> [host 3] failed to grab refs for /ipfs/QmVYijdfMK3sBH4ncY4rDsZ2KtGyAF2vi46cxbYvBwC8od: Post http://[fc4e:5427:3cd0:cc4c:4770:25bb:a682:d06c]:5001/api/v0/refs?arg=%2Fipfs%2FQmVYijdfMK3sBH4ncY4rDsZ2KtGyAF2vi46cxbYvBwC8od&enc=json&r=true&stream-channels=true: dial tcp [fc4e:5427:3cd0:cc4c:4770:25bb:a682:d06c]:5001: connection timed out
<pinbot> now pinning /ipfs/QmX5SScJsbuGT8TfGkP3UNPpAaJZ2UXqRunvPsemN8oQ3V/cap.png
<pinbot> now pinning /ipfs/QmX5SScJsbuGT8TfGkP3UNPpAaJZ2UXqRunvPsemN8oQ3V
vijayee_ has joined #ipfs
<jbenet> hey substack i remember a keyboot app you made-- i'm thinking about caps and that sort of stuff.
<jbenet> i remember you made it on top of hyperboot
<jbenet> (it looked different)
<substack> I've only built example apps with it
<substack> but I'm working on something similar for williamcotton's company right now
gordonb has joined #ipfs
<jbenet> substack is it open source?
<substack> it's not public yet but it will be
<substack> bitcoin wallet for identity using all the same tricks as keyboot
Leer10 has quit [Remote host closed the connection]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
inconshreveable has quit [Remote host closed the connection]
<ipfsbot> [go-ipfs] lgierth created directory-listing-css (+1 new commit): http://git.io/vGLmF
<ipfsbot> go-ipfs/directory-listing-css f375ade Lars Gierth: gateway: embed directory-listing styles...
kord has quit [Read error: Connection reset by peer]
kord has joined #ipfs
vonzipper has quit [Ping timeout: 246 seconds]
vonzipper has joined #ipfs
machrider has quit [Ping timeout: 246 seconds]
rjeli has quit [Ping timeout: 246 seconds]
silotis has quit [Ping timeout: 246 seconds]
silotis has joined #ipfs
<substack> daviddias: muxer.attach(transport, isListener) is a kind of un-nodelike api here https://github.com/diasdavid/abstract-stream-muxer
<substack> better to have the muxer return a duplex stream for these reasons: https://github.com/dominictarr/rpc-stream#rant
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
rjeli has joined #ipfs
<substack> I guess that's in order to talk to lower-level interfaces, but meh
machrider has joined #ipfs
<jbenet> substack: may have been a result of keeping impls similar cross lang: https://github.com/jbenet/go-stream-muxer
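The duplex-stream shape substack argues for can be sketched in Python; all class and method names below are hypothetical illustrations of the API style, not the actual abstract-stream-muxer or go-stream-muxer interface:

```python
# Sketch (names hypothetical): instead of muxer.attach(transport, isListener),
# the muxer hands the caller a duplex object they can read and write, so it
# composes with any other stream the caller already has.

class DuplexStream:
    """A trivial in-memory duplex: writes are buffered until read."""
    def __init__(self):
        self._buf = bytearray()

    def write(self, data: bytes) -> None:
        self._buf.extend(data)

    def read(self) -> bytes:
        data, self._buf = bytes(self._buf), bytearray()
        return data

class Muxer:
    def open_stream(self) -> DuplexStream:
        # The caller wires the returned stream to a transport themselves,
        # rather than surrendering the transport to the muxer.
        return DuplexStream()

s = Muxer().open_stream()
s.write(b"hello")
print(s.read().decode())
```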
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<substack> go-ipfs seems to have issues on npm https://gist.github.com/substack/117656da9e8dcb15d8cf
Leer10 has joined #ipfs
<ipfsbot> [go-ipfs] lgierth opened pull request #1615: gateway: embed directory-listing styles (master...directory-listing-css) http://git.io/vGLWr
<jbenet> substack: uname -a ?
<jbenet> substack: i just installed it fine in darwin-amd64 and linux-amd64
* lgierth zzz
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
kord has quit [Quit: Leaving...]
<substack> amd64
<substack> npm install -g go-ipfs@latest worked
<substack> the pre-release tags messed it up I think
<jbenet> substack: interesting. wonder why it worked for me :/ -- the prerelease tags are there cause i screwed something up in the module and i want to peg the versions of go-ipfs npm module to the versions of the go-ipfs binaries
<jbenet> !pin /ipfs/Qmf82jUC9ZuoSTCNY55hyx3HmiDed3WnhFD5PC7CTSPmC2
<pinbot> now pinning /ipfs/Qmf82jUC9ZuoSTCNY55hyx3HmiDed3WnhFD5PC7CTSPmC2
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<pinbot> [host 3] failed to grab refs for /ipfs/Qmf82jUC9ZuoSTCNY55hyx3HmiDed3WnhFD5PC7CTSPmC2: Post http://[fc4e:5427:3cd0:cc4c:4770:25bb:a682:d06c]:5001/api/v0/refs?arg=%2Fipfs%2FQmf82jUC9ZuoSTCNY55hyx3HmiDed3WnhFD5PC7CTSPmC2&enc=json&r=true&stream-channels=true: dial tcp [fc4e:5427:3cd0:cc4c:4770:25bb:a682:d06c]:5001: connection timed out
sharky has quit [Ping timeout: 246 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
sharky has joined #ipfs
mildred has joined #ipfs
xhs has joined #ipfs
xhs has left #ipfs [#ipfs]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
patcon has joined #ipfs
Eudaimonstro has joined #ipfs
akhavr has joined #ipfs
akhavr has quit [Ping timeout: 264 seconds]
akhavr has joined #ipfs
zabirauf has joined #ipfs
<ipfsbot> [go-ipfs] rht pushed 2 new commits to check-for-api: http://git.io/vGLwO
<ipfsbot> go-ipfs/check-for-api e1d8200 Christian Couder: test-lib: use all the test_launch_ipfs_daemon() arguments...
<ipfsbot> go-ipfs/check-for-api 54bd1bd rht: Fix test cases for ipfs api check...
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
mildred has quit [Ping timeout: 255 seconds]
mildred has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
zabirauf has quit [Max SendQ exceeded]
akhavr has quit [Ping timeout: 272 seconds]
rht___ has joined #ipfs
<rht___> jbenet: 2 more cases for check-for-api
bedeho has quit [Ping timeout: 252 seconds]
<ipfsbot> [go-ipfs] rht force-pushed check-for-api from 54bd1bd to be96d8d: http://git.io/vGeei
<ipfsbot> go-ipfs/check-for-api be96d8d rht: Fix test cases for ipfs api check...
mildred1 has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
azm_ has quit [Ping timeout: 246 seconds]
azm_ has joined #ipfs
domanic has quit [Quit: Leaving]
domanic has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Eudaimonstro has quit [Read error: Connection reset by peer]
chriscool has quit [Read error: No route to host]
domanic has quit [Ping timeout: 246 seconds]
chriscool has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
dignifiedquire has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
tjgillies__ has quit [Quit: Connection closed for inactivity]
Tv` has quit [Quit: Connection closed for inactivity]
inconshreveable has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<ipfsbot> [go-ipfs] rht opened pull request #1617: Mocknet: use explicit LinkAll() & ConnectAll() (master...rm-full-mesh-linked) http://git.io/vGtTQ
domanic has joined #ipfs
<cryptix> hey ipfs
mildred1 has quit [Quit: Leaving.]
mildred1 has joined #ipfs
rht___ has quit [Quit: Connection closed for inactivity]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<mildred> hello ipfs, anyone here to discuss linked data?
amstocker_ has joined #ipfs
<cryptix> mildred: hey :) sorry (still...) not an expert
<cryptix> (on LD)
<mildred> jbenet: you're here?
amstocker has quit [Ping timeout: 244 seconds]
patcon has quit [Ping timeout: 255 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
amstocker_ has quit [Ping timeout: 246 seconds]
<davidar> mildred: ask your question, somebody might be able to answer
<davidar> mildred: otherwise jbenet is usually here in a few hours
<mildred> ok, I don't have specific questions, just that I'm thinking a lot about Linked Data and I think this needs more definition
<davidar> mildred: I might be able to answer general questions, but I don't know much about the specifics of IPLD
<mildred> I just discovered IPLD, and I am wondering how it relates to IPFS. Is it designed to replace the IPFS object format ?
<davidar> mildred: I believe that's the plan, yes
<mildred> don't you think that using a JSON format (even if binary) would make it slow?
inconshreveable has quit [Remote host closed the connection]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<davidar> mildred: I think CBOR is supposed to be fast to parse, but tbh I'm not exactly sold on it either
misalias is now known as prosodyContexte
prosodyContexte is now known as prosodyContext
prosodyContext is now known as misalias
<jbenet> Hey mildred o/ -- cbor is fast, as fast as protobuf. It's a bit less space efficient, but gzip may put that to the test.
misalias is now known as mistake
<mildred> jbenet: cbor is the format implemented in go-ipld ?
mistake is now known as misaim
<davidar> jbenet: I kind of like how protobuf separates the schema from the data :)
<jbenet> The goal of moving to JSON-LD compatible representation (stored in cbor) is mainly about exposing a json data model to the end user, because it's way easier to use for applications on top
misaim is now known as misewmn
<mildred> I don't doubt that in itself it is fast, but for following links of a unixfs directory, you need to traverse two objects (and follow two string keys instead of one: the file name)
<jbenet> davidar: yeah there's space benefit to that for sure. And validation. but also costs of use.
<mildred> I don't like string keys so much, but it's not such a big price to pay.
<davidar> jbenet: how does it compare to self-describing messages in protobuf? https://developers.google.com/protocol-buffers/docs/techniques?hl=en#self-description
misewmn is now known as miswear
<jbenet> mildred cbor is implemented in go-ipld yes, and why the double string key?
<mildred> what is that CBOR format ? you have a link ?
<mildred> ok
<jbenet> cbor.io I think
miswear is now known as miswism
<mildred> jbenet: you have to follow object[fname]["mlink"] to get to the link (according to your examples)
<jbenet> It's a new compact json made primarily for IoT things, and some crypto applications
<jbenet> mildred ah yes you're right.
<mildred> and there is another problem, this unixfs format is not compatible with JSON-LD. A compatible format would make it harder to traverse knowing an entry name
<mildred> perhaps we can provide a JSON compatible format to the user, but internally, in binary storage, optimize locating objects in lists using an index key
<jbenet> mildred what do you mean not compatible? You mean the "added values not defined in context" problem?
<jbenet> Well, actually obj[fname]["hash"]
<mildred> JSON-LD compatible:
<mildred> {entries: [
<mildred> {name: bar.txt, mlink: HASH}
<mildred> {name: foo.txt, mlink: HASH}
<mildred> ]}
<mildred> JSON-LD incompatible:
<mildred> {
<mildred> foo.txt: {mlink: HASH}
<mildred> bar.txt: {mlink: HASH}
<mildred> }
<mildred> yes, that's the problem
<mildred> *keys in JSON-LD must be RDF predicates*
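mildred's two shapes can be contrasted directly in Python: with the JSON-LD-compatible entries list, finding a file by name is a linear scan, while the (JSON-LD-incompatible) map form is a single key lookup. The hashes are hypothetical placeholders:

```python
# JSON-LD compatible: keys are predicates, so file names live in values.
compatible = {"entries": [
    {"name": "bar.txt", "mlink": "QmBar"},
    {"name": "foo.txt", "mlink": "QmFoo"},
]}

# JSON-LD incompatible: file names used directly as keys.
incompatible = {
    "foo.txt": {"mlink": "QmFoo"},
    "bar.txt": {"mlink": "QmBar"},
}

# List form: O(n) scan over entries to find a name.
link = next(e["mlink"] for e in compatible["entries"] if e["name"] == "foo.txt")
print(link)

# Map form: O(1) dict lookup.
print(incompatible["foo.txt"]["mlink"])
```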
<jbenet> mildred yeah we could. One important thing though is making it easy for people to reproduce the formats and canonical serialization to compare the hashes
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<davidar> jbenet: my other issue is, since cbor is newish, what happens if it doesn't take hold? (like whatever facebook's competitor to protobuf was)
<jbenet> mildred why can't entries be a map? Not grasping it atm
<jbenet> davidar it's not a big problem as parsers are pretty simple. The important thing to me is providing a super simple data model to the user
<mildred> the issue is that JSON-LD consider that json keys are predicates that link the subject to an object
<mildred> a file name is not a predicate, it's the object of the unixfs:filename property/predicate
<jbenet> mildred there seems to be more restriction to json-ld than I thought before
<mildred> we have to choose: either be compatible with JSON-LD as it is now and we can't use filenames in maps
<jbenet> Hm :/
<mildred> or roll our own (and maybe push for inclusion in a later JSON-LD spec, or maybe not)
<jbenet> I guess we could do the entries array of objects, it is annoying though.
<jbenet> Well the worry I have is how much this is going to bog down users. I think we should be able to provide a "it's just json" data model interface to people.
<mildred> We can imagine an extension to the current JSON-LD if we want
<mildred> but then we need to define clearly what we want with no ambiguity
<jbenet> Yeah we could. And discussed that with dlongley yesterday
<jbenet> mildred very much agreed.
<mildred> jbenet: in any case, it's just JSON. We are not forced to parse it using linked data. But for our own formats, it'd be better to use LD though
<jbenet> Yeah LD is a win. Maybe we can come up with a simple way to reconcile this, JSON-LD-lite type thing
<davidar> jbenet: something I was wondering (possibly stupid :): insteading of explicitly linking to hashes, could the conceptual model be "one huge json object" which gets transparently broken into objects with pointers (hashes)
<mildred> If we want to use filenames in JSON keys, I think the context would need to contain a specific directive to parse the JSON differently, telling that the keys are not predicates. We would need also to remove the ambiguity in case we have a file starting with @
<daviddias> substack: I started with muxer.attach so that the API from between go node was similar. mafintosh also brought the point of making it a duplex stream and proposed that to Indutny, here - https://github.com/indutny/spdy-transport/issues/14
domanic has quit [Ping timeout: 250 seconds]
<mildred> davidar: I think it's better to do that way. We don't want to have RDF objects called links. I would prefer RDF objects called files, directories or entries
<mildred> jbenet: There is also the option to use RDF internally and use JSON-LD to present the internal RDF format to the users
<jbenet> davidar exactly!!! \o/ that was a big TBL moment earlier this summer
<jbenet> daviddias was there.
<davidar> jbenet: the other benefit of that approach is, if you had a huge dictionary that couldn't fit into a single ipfs object (like a fulltext index), you could transparently encode it as a distributed trie
<jbenet> Yep exactly. I mean IPFS as is already lets you do that, but you have to do a bit more work for it
<davidar> jbenet: yeah, but it'd be cool if were a core feature of the format :)
<davidar> jbenet: also, TBL?
<jbenet> mildred I don't think RDF is particularly fast, and very space inefficient. Though might also gzip well. Idk in this I think it's best to go with non SW/LD formats that are well established
<jbenet> davidar yeah, transparent traversal is nice, but it's also hard to know when to do it-- sometimes you may want just the partial object
<drathir> mornin...
<mildred> jbenet: I wasn't thinking of RDF XML serialization, but an RDF model that we can implement the way we want. For example my last message in https://github.com/ipfs/go-ipld/issues/1
<mildred> Implement RDF model in protocol buffers
<davidar> jbenet: you could do both pretty easily, no?
<davidar> the partial object would be the "low level" interface
<drathir> davidar: ah... but still thats nice...
<jbenet> mildred interesting. It gets harder and harder to use the raw data for end users though. A benefit of cbor is it's just one standard lib serialization step from nice, familiar json
<jbenet> (Thinking of webapp devs here)
<jbenet> davidar yeah can definitely be done
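jbenet's "one standard lib serialization step from familiar JSON" point can be illustrated with the stdlib json module standing in for a CBOR codec (CBOR shares JSON's data model but encodes it in binary; a real codec would swap in here). The object contents are hypothetical:

```python
import json

# A plain dict is the user-facing data model...
obj = {"data": "hello", "links": [{"@type": "mlink", "hash": "QmExample"}]}

# ...and the wire format is one encode away (json here; cbor in practice).
wire = json.dumps(obj, sort_keys=True).encode()

# One decode step recovers ordinary dicts and lists, no schema required.
roundtrip = json.loads(wire)
print(roundtrip == obj)
```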
<davidar> jbenet: would make it really easy to build a search engine webapp on ipfs :)
<mildred> Using a JSON format for presentation is not that silly, but we would need to fix two things I believe:
<mildred> - extend JSON-LD to use arbitrary keys in maps (this could benefit other projects as well)
<mildred> - make it possible to access data in binary format more quickly than dereferencing a string in a JSON object
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<davidar> jbenet: well, any webapp that would traditionally need a big backend database for that matter
<mildred> Looking at this http://www.w3.org/TR/json-ld/#sets-and-lists I think we could declare in the context "@container":"@map" as an extension and use file names as json keys
<mildred> Or better yet, we have @index: http://www.w3.org/TR/json-ld/#data-indexing
<mildred> already in the spec
<mildred> This enables us to use file names as keys !!! yay
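The JSON-LD 1.0 data-indexing feature mildred found marks a term with `"@container": "@index"`, so arbitrary strings (file names) can appear as map keys without being read as RDF predicates. A sketch with hypothetical term IRIs:

```python
# Directory object using an @index container so file names are map keys.
# The "entries"/"mlink" terms and example.org IRIs are made up for
# illustration; only the @container/@index mechanism is from the spec.
directory = {
    "@context": {
        "entries": {"@id": "http://example.org/unixfs#entry",
                    "@container": "@index"},
        "mlink": "http://example.org/merkle#link",
    },
    "entries": {
        "foo.txt": {"mlink": "QmFoo"},
        "bar.txt": {"mlink": "QmBar"},
    },
}

# With @index, lookup by file name is a single key access:
print(directory["entries"]["foo.txt"]["mlink"])
```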
<jbenet> mildred nice finds-- know if there's a way to reset the whole doc and put it in a full "@index" mode?
<jbenet> (i.e. i dont think the @index there extends beyond the one level it specifies (post))
<jbenet> davidar: yeah for sure
<mildred> If you link the context separately (not from within JSON but via HTTP Link header or specific entry in our binary format) then I think you can put @container:@index at the top of the context
<mildred> I could try a pull request on ipld with that
<davidar> ok, I might hack together a prototype, I'm feeling pretty pumped about everything-is-a-distributed-radix-tree :)
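davidar's "one huge json object" idea can be prototyped in a few lines: sub-objects are stored by hash and replaced with pointer dicts, and traversal resolves the pointers transparently. Everything here (the in-memory store, the "mlink" pointer key) is a hypothetical sketch:

```python
import hashlib
import json

store = {}  # stand-in for a content-addressed block store

def put(obj) -> dict:
    """Store a sub-object by hash; return a pointer that replaces it."""
    data = json.dumps(obj, sort_keys=True).encode()
    digest = hashlib.sha256(data).hexdigest()
    store[digest] = data
    return {"mlink": digest}

def resolve(node):
    """Follow a pointer back to the stored sub-object, if node is one."""
    if isinstance(node, dict) and "mlink" in node:
        return json.loads(store[node["mlink"]])
    return node

# Conceptually one big object; physically, "docs" is a hash pointer.
big = {"docs": put({"a.txt": "hello", "b.txt": "world"})}
print(resolve(big["docs"])["a.txt"])
```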
<jbenet> mildred: so it may be that there is some @context file that acts as a sort of "html5 or css shiv" to "reset" the defaults to what we want them to be
<mildred> that would work for one level only. Further down, you must still specify objects with known keys (or a unknown @vocab)
<jbenet> mildred: and maybe, there is a small subset of JSON-LD that is just JSON and that we can define a 1:1 mapping to regular JSON-LD
<mildred> What's the problem if a json file is not compatible with JSON-LD? If the designer of the JSON file didn't want linked data, he would be fine without it
<mildred> The user can choose the format he wants. Wants JSON-LD, we can provide (and if the user wanted JSON-LD, that's because there was a properly designed @context provided). If the user wants just JSON, we can also provide
<mildred> If the user wants our special binary format, we can also provide
<jbenet> mildred: oh because all objects in ipfs will carry our context and thus be JSON-LD
<mildred> jbenet: but can be interpreted as plain JSON if needed
silotis has quit [*.net *.split]
thefinn93 has quit [*.net *.split]
oleavr has quit [*.net *.split]
dandroid has quit [*.net *.split]
step21 has quit [*.net *.split]
bren2010 has quit [*.net *.split]
chriscool has quit [*.net *.split]
azm_ has quit [*.net *.split]
machrider has quit [*.net *.split]
rjeli has quit [*.net *.split]
okket has quit [*.net *.split]
sbruce has quit [*.net *.split]
mg- has quit [*.net *.split]
mondkalbantrieb has quit [*.net *.split]
warner has quit [*.net *.split]
joeyh has quit [*.net *.split]
Bat`O has quit [*.net *.split]
<jbenet> it's easier to reason about the data format in general if users (including implementors) only have one transform between their datastructure (or json) -> wire format.
<mildred> right
<jbenet> mildred yeah dlongley said the same thing. what i worry about there is segregating the two use cases. i think it is possible to have a JSON-LD like thing that's strictly incrementally added to proper json. i.e. instead of making the defaults be RDF, make the defaults be regulars JSON and only enter RDF-like semantics when explicitly requested
<mildred> jbenet: if the JSON data is not thought in the context of linked data, what would be the @context like? Would we provide a sane default? And with JSON-LD data (such as unixfs) we would provide a unixfs @context?
chriscool has joined #ipfs
azm_ has joined #ipfs
machrider has joined #ipfs
rjeli has joined #ipfs
okket has joined #ipfs
Bat`O has joined #ipfs
mondkalbantrieb has joined #ipfs
joeyh has joined #ipfs
mg- has joined #ipfs
sbruce has joined #ipfs
warner has joined #ipfs
silotis has joined #ipfs
thefinn93 has joined #ipfs
oleavr has joined #ipfs
dandroid has joined #ipfs
step21 has joined #ipfs
bren2010 has joined #ipfs
<mildred> jbenet: easy: if you want RDF, provide a context, if you don't, don't provide a context. That's all or nothing though
notduncansmith has joined #ipfs
silotis has quit [*.net *.split]
thefinn93 has quit [*.net *.split]
oleavr has quit [*.net *.split]
dandroid has quit [*.net *.split]
step21 has quit [*.net *.split]
bren2010 has quit [*.net *.split]
notduncansmith has quit [Read error: Connection reset by peer]
<mildred> But with @container: @index, a good portion of JSON documents should be compatible with a schema designed for it as an afterthought.
jbenet_ has joined #ipfs
jbenet_ is now known as jbenet_irssi
silotis has joined #ipfs
thefinn93 has joined #ipfs
step21 has joined #ipfs
bren2010 has joined #ipfs
oleavr has joined #ipfs
dandroid has joined #ipfs
<mildred> We could provide a default @context that would extend the JSON-LD format and that would probably be considered invalid by JSON-LD parsers and thus might dump the original JSON unmodified. We could think for example of:
<mildred> @context: {@id: @index}
<jbenet_irssi> ((( i think this segregation is really bad for users, it creates a bimodal world where you either DONT use LD at all, or must read lots of specs. ideally we could have a world with incremental improvement where the data is _just json_ and @context directives introduce the RDF semantics )))
<jbenet_irssi> hmmm yeah possibly.
<mildred> jbenet_irssi: What I don't understand is what is the problem if the context does not describe all the data, and there is data left out of the model. The data won't be included in the RDF model but would still be available in JSON, unless it is stripped somehow
<mildred> Do we need to run the JSON-LD processor ourselves ?
<jbenet_irssi> when you JSON-LD parse the json, it strips the data left out of the model.
<jbenet_irssi> we dont need to, no
<jbenet_irssi> a worry is we're going to start making "things that look like JSON-LD but arent and arent specified either"
<mildred> What we need is recognize the types we make use of (unixfs, commits, ipns, ...) for our own operation. External uses would be given the JSON-LD unprocessed...
<jbenet_irssi> as you said earlier, if we want to be LD-compatible and we want something simpler, we should specify very concretely what we want
<mildred> right
<jbenet_irssi> well, what i want is also to make a powerful application model where the user can make datastructures as easily as we can. i dont want to segregate them either--
<jbenet_irssi> meaning that unixfs should be easy to implement and "userland" in a way
<jbenet_irssi> (it is somewhat privileged by commands, but the implementation should be full userland)
<jbenet_irssi> (so that people can implement similarly useful things)
<jbenet_irssi> (and have a reference to how to do it)
<jbenet_irssi> (the moment we start doing things differently between "us" and "the users" things get bad for "the users".)
<mildred> Userland just has to run the JSON-LD processor on its side
<spikebike> jbenet_irssi: sounds good, I'm a big fan of eating your own dog food
<jbenet_irssi> hmm so you mean, parse CBOR -> JSON, then we (in our datastructs) do additional JSON-LD processing, and users can do that too if they want?
<mildred> Tell me if I'm wrong, but the JSON-LD is designed for REST API, the LD processor is designed to be run on the client (for GET requests anyway), not on the server
<jbenet_irssi> (maybe this is the easiest pragmatic thing)
<mildred> jbenet_irssi: yes, exactly
<jbenet_irssi> mildred: yeah i think that is correct.
<mildred> this provides the most flexibility for both the server and the client
<mildred> (and the server could do optimized parsing of wire format to optimized internal data structures, so great)
<jbenet_irssi> mildred: one additional wrench, all the merkle-links are JSON-LD objects: {"@type": "mlink", "hash": "<multihash>"}
<jbenet_irssi> well, we dont have a client/server model
slothbag has joined #ipfs
<jbenet_irssi> JSON-LD is designed that way, but in our world, everything is both a server and client
<mildred> s/client/users/
<mildred> I was thinking of web apps running using the gateway interface
<jbenet_irssi> yeah, but they will soon have a full node-ipfs implementation :)
<mildred> If we are going to use LD internally, I think that merkle-links should be encoded as URI. That fits nicely the data model
<mildred> A link would look like: {@type: @id, @id: "ipfs://HASH"}
<jbenet_irssi> that makes them very long
<mildred> the wire format can optimize this away
<jbenet_irssi> ok i guess not much
<jbenet_irssi> although, that looks very weird and magical
<mildred> it would, yes
<jbenet_irssi> {"@type": "mlink", "hash": "<hash>"} looks way friendlier to users
<mildred> yes, but it translates in a different RDF model
<jbenet_irssi> (and users will be touching the raw data, too, so the cleaner it all looks the better)
<jbenet_irssi> yeah, i'd be scared of touching '{@type: @id, @id: "ipfs://HASH"}'
<jbenet_irssi> but i would feel fine adding a property to {"@type": "mlink", "hash": "<hash>"}
<mildred> You can choose between:
<mildred> [source object] -> [link] -> [target object]
<mildred> or:
<mildred> [source object] --link--> [target object]
<mildred> (in the RDF model)
<jbenet_irssi> (which btw, is something i'd like to be able to let users do, which we can do with @vocab i think.
<jbenet_irssi> or rather, not even bother, as they get the raw JSON too)
<mildred> Adding a property in {@type: @id, @id: "ipfs://HASH"} would be like adding a property in the target object
<mildred> Adding a property in {"@type": "mlink", "hash": "<hash>"} would be like adding a property in the link node
<jbenet_irssi> ah yeah, i prefer the link node version
<jbenet_irssi> i think we're in a world where links are typed and it's convenient to be able to add proprties to link nodes.
<jbenet_irssi> we _could_ make the user make intermediate objects to link to, but this does introduce two mlinks to dereference, which is expensive.
<jbenet_irssi> hence the preference for embedded link nodes.
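The two link encodings under discussion can be written out side by side; the hashes and the extra "size" property are hypothetical, only the shapes come from the chat:

```python
# Embedded link node: extra properties (e.g. a size) live on the link itself.
link_node = {"@type": "mlink", "hash": "QmTarget", "size": 1234}

# URI/@id form: identifies the target directly, so any extra keys would
# describe the target object rather than the link.
link_as_id = {"@type": "@id", "@id": "ipfs://QmTarget"}

# Both resolve to the same target hash:
print(link_node["hash"])
print(link_as_id["@id"].split("://")[1])
```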
<mildred> The issue is that if you run RDF logic on top of that, you can't say things like "I want the file named "foo" in the directory" but you'd be required to say "I want the file that is the target of a mlink named "foo" for that directory"
<jbenet_irssi> (because you can always make non-embedded link nodes too if you really want them)
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet_irssi> mildred: but that is what unix is doing under the hood, too. a dir has dir entries, and you look up the dir entry with name "foo".
<jbenet_irssi> and get the file that that entry points to.
<mildred> yes, you can have both, but you'll end up with huge objects. In any case, it's the person who designs the JSON format who will decide
<jbenet_irssi> yep
<mildred> jbenet_irssi: you're right, that's not so bad either
<mildred> now, the hundred dollar question: how do we reference the data part of our objects in the JSON ? Or perhaps, there is no data part, it is all embedded in the JSON ?
<jbenet_irssi> yeah it's all embedded
<jbenet_irssi> this was one of the wins of moving to json-land
<mildred> (By the way, we would need to order important things first in the wire format, we don't want to parse a lengthy data structure to dereference a small link)
<jbenet_irssi> that links and data dont need to be separated as the links are self describing and we can parse them out
<mildred> yes, but in the current wire format, you don't need to parse the data if you are just following a link. You can truncate the object if you like. Faster processing
<jbenet_irssi> mildred: hmm that makes things more complicated because "important" may be a matter of opinion, and we have to force one canonical ordering so hashes check out
<mildred> that's the issue with JSON, there is no canonical order.
<jbenet_irssi> mildred: and it complicates serializer implementations. right now we can get away with a standard cbor parser
<mildred> We could define the order in the @context also
<jbenet_irssi> oh but there is a "canonical json" whose keys are sorted, and cbor too
<mildred> there are ways to make it work, yes
<mildred> didn't know that, thanks
<jbenet_irssi> which many if not most parsers will support correctly
<mildred> when designing the format, we just have to make sure that the important data gets the lowest keys in lexical order
<mildred> maybe it would be enough, or maybe that's over optimization...
<jbenet_irssi> i think it's not hard to scan, all of these things have length prefixes so can skip around quickly
<jbenet_irssi> ((( i was designing a format some time ago that automatically indexed the datastructure, putting all the offsets at the beginning for quick access
<mildred> probably, I think I'm overthinking the whole thing
<jbenet_irssi> and which compressed the same keys in streams with ad-hoc schemas+tags like protobuf, and so on. but at the end of the day the wins are small compared to ease of use for devs/implementors/users)))
<jbenet_irssi> dev attention span does not increase exponentially
<jbenet_irssi> (yet)
<mildred> If I want to participate in ipfs development, where can I help ?
<ipfsbot> [go-ipfs] jbenet pushed 2 new commits to master: http://git.io/vGtbf
<ipfsbot> go-ipfs/master 63c7741 rht: Refactor FullMeshLinked and ConnectAll()...
<ipfsbot> go-ipfs/master 61cde12 Juan Benet: Merge pull request #1617 from rht/rm-full-mesh-linked...
<mildred> jbenet_irssi: and what's the status of ipns, can I help there ?
<jbenet_irssi> mildred everywhere! i think this -- finalizing the ipld stuff -- is the highest help
<jbenet_irssi> because we need this for records, which we need for fixing naming
<mildred> the issue tracker on ipld is empty save for one issue ...
<jbenet_irssi> i created this horrible dependency graph out of it, and we're almost out of the darkness
<jbenet_irssi> yeah sorry, the next step is finalizing multicodec and importing it into go-ipld
<jbenet_irssi> (( ok here we go, another wrench ))
<jbenet_irssi> so because we _dont_ want to break existing links
<mildred> In fact, these days I spend almost one to two hours on the bus, could help with some code there
<mildred> So many repositories in github.com/ipfs, don't know where to start :)
<jbenet_irssi> and because we anticipate that in a lifetime there will come a day when people will want to change the core format to something way better, we're future proofing with "multicodec" -- https://github.com/jbenet/multicodec
<mildred> I so love this idea
<jbenet_irssi> this _does_ throw the wrench that we need a mapping from today's protobuf objects to the nice hot json/cbor-based objects we're describing. but thankfully that's not very hard, we just change the protobuf->json mapping (as there already is one)
<jbenet_irssi> (and the wrench that when hashing protobuf, we _dont_ hash the multicodec header along with it, else the current links wouldnt check out in hashes
<jbenet_irssi> ((that could mean "never take the header into account for the hash, as the content itself will be very different anyway" to avoid wart edge cases... but i think that's likely less clean than just having one edge case with protobuf
<jbenet_irssi> ))
<mildred> why not use existing mime types ?
<mildred> well, I'm still reading
<jbenet_irssi> mildred: oh, we could
<jbenet_irssi> mildred: that's probably what we should do. i guess one reason may be that they're long, and this header is replicated everywhere, but then i guess gzip.
<jbenet_irssi> daviddias: thoughts on using MIME for the multicodec keys?
<jbenet_irssi> that would save us a whole lot of trouble defining short keys for formats
<mildred> We could also use RDF URIs, like XML namespaces :)
<jbenet_irssi> mildred one other option is using hashes of the codec program in some universal code as the name
<mildred> The idea of a format path fits more nicely in the IPFS framework though
<jbenet_irssi> which would make it possible for users to retrieve the code and run it.
<jbenet_irssi> (but that comes way later)
<mildred> all binary and unreadable :)
<jbenet_irssi> yeah, that's the downside.
<mildred> You have a problem with the hash of the format. The format is itself encoded using multicodec, which format ???
<daviddias> jbenet_irssi: just finished writing this https://github.com/diasdavid/node-ipld/issues/2 an idea I had last night that kind of kept me from sleeping, hope it makes sense :)
<daviddias> re: MIME - Are we hitting any issue with what we have now?
<jbenet_irssi> daviddias: hahah yeah now instead of doing a lookup like "/person" make the codec name the hash of the program
<jbenet_irssi> daviddias: it's unreadable, but computable
<jbenet_irssi> daviddias: another option is to allow resolvable links, like regular URLs, which would resolve at run time, but be more human readable or whatever.
<daviddias> I mention in the end that we can have a 'package manager' for data encoders/decoders
<jbenet_irssi> mildred hahaha yep.
<daviddias> like, /person/1.0.0 -> hash and store through that hash in the network
<jbenet_irssi> daviddias: ahh i see so one thing that resolves the name to the program
<jbenet_irssi> if the versions are guaranteed not to change, can cache them forever, etc.
<daviddias> like npm, if you 'publish it, there is no changin' it'
<daviddias> one can always publish a newer version, which can be good to fix data model problems or just for upgradability
<jbenet_irssi> daviddias: so one question, why put the multicodec value on the link, instead of in the data itself?
<jbenet_irssi> like what i mean is, is that just an optimization? or is the multicodec name _not_ on the data?
<jbenet_irssi> (optimization being that it's both in the data and on the link)
<jbenet_irssi> like i would expect the thing "/ipfs/QmbuH1ZExsQvzVEFFw9S2CivasHrQ9KmCy6zbxSymq8X5r/person-1" to begin with "/ipfs/link/person/'"
<daviddias> the idea was to have a way to describe the data first and then if it is describing a object (1st scenario) the decoder knows to look for the other fields in that object
<daviddias> if the multicodec references a link, then the decoder would know to get that value
<daviddias> also, for values that have slashes, it makes it hard for multicodec
<daviddias> like URL with paths
<mildred1> Why would the multicodec contain the string "person", this is not a format
<daviddias> foo: {
<daviddias> '@multicodec': '/www/link/person/'
<daviddias> '@value': 'http://someurl.com/person-1'
<daviddias> }
<daviddias> making this all part of the multicodec would have to resort to escaping chars, which can be done
<jbenet_irssi> mildred1: i think "person" would be a format there-- or a decoder of some sort
<jbenet_irssi> daviddias: oh i dont mean the whole thing, i mean
<jbenet_irssi> the thing that is at "http://someurl.com/person-1" begins with the multicodec header "/www/link/person/"
<daviddias> ah, you mean, only after fetching we know the type of the data?
<mildred1> Why not just put the name of the codec, do we need a path ? We can always embed a multicodec value inside another multicodec value...
<jbenet_irssi> curl http://someurl.com/person-1 | head -n 17 == "/www/link/person/" or whatever
<daviddias> jbenet_irssi: wouldn't that be the same as referencing some link that returned this blob "foo: {
<daviddias> '@multicodec': '/person/'
<daviddias> name: 'Tim'
<daviddias> age: 9000
<daviddias> }"
jbenet_ has joined #ipfs
<mildred1> I have the impression that this is very much like RDF schemas. If you want to define a vocabulary, you have FOAF
<jbenet_> daviddias: i think having the multicodec in the link _too_ is useful, but is an optimization. (i.e. i think the data should always carry the multicodec header)
<jbenet_> mildred1: the reason for paths is here: https://github.com/jbenet/multicodec#the-protocol-path
<jbenet_> mildred1 for me, a path is a more general URI, that works in unix.
jbenet_irssi has quit [Ping timeout: 250 seconds]
<jbenet_> mildred1 i think TBL reinvented the wheel there-- same as we may be reinventing all the RDF stuff now :)
jbenet_irssi has joined #ipfs
<mildred1> What's missing in go-multicodec?
<jbenet_> i'm pushing it up now, it's done. whyrusleeping pushed the last part (a license in a subpkg)
<jbenet_> i'm vendoring things atm
<daviddias> Agree that having a header on the data helps, we can have both: one way where the 'describing' step is separated from the 'data model', and another where they are just concatenated
<daviddias> for example, say I own a currently stable JSON API
<daviddias> you might consume it as you wish, or if you want, I can give you this multicodec that describes (as an afterthought) how the data in the JSON API is formatted, so you don't have to create a parser yourself, and I also don't have to change the API
<daviddias> e.g. the github or twitter APIs and their versions. if we had a /twitter-api/1.0.0/person and this multicodec also revealed a decoder on IPFS, people could use an old API with a Linked Data feeling
<jbenet_> yeah that sounds good (and yeah the "self-describing" part of the multis is about putting the description in the data itself)
<mildred> Linked Data is designed so you could use any API version you want and still be compatible without having to special case for versions
<jbenet_> mildred: i think the idea is that this would work with non-LD
<jbenet_> mildred: as in, just package-management of everything
<jbenet_> mildred: which is very little work, but makes things possible without having to redo things as LD yet or whatever
<mildred> Well, LD doesn't seem so hard for our use cases. Just put @context:{ @vocab: http://example.com/unique-identifier-for/unixfs }. But it is not universal, I agree
<davidar> yeah, I like the idea of multicodec rather than committing to a specific format like ld
<mildred> But I think I would prefer to use the multicodec format instead of specifying @multicodec in the json stream.
<mildred> This way you are compatible with json documents that require the @multicodec key to be absent. Think of the unixfs directory where the key is a file name. What if I want to name a file @multicodec ?
<mildred> Mime types can do that. For example you have application/xml which is just a serialization format, and you have application/xhtml+xml that is a xml format in the xhtml dialect
<mildred> what you want is to say you have an application/person+json type (an application/json document using the vocabulary to describe a person)
<jbenet_> mildred what about multicodec type: /mime/<mime-type> ? (ideally maybe a URI, but at least "mime" makes it obvious)
<jbenet_> i say that because not all format identifiers out there will be mime-friendly
<mildred> yes, mime types are very restricted
* daviddias has to run now, will be back after lunch time. Feel free to drop your notes and ideas on the issue as well :)
<mildred> but they have the notion of encoding for text formats, or dialects for a container (XML,JSON) format
<davidar> Mildred: you could do the same thing in multicodec, no?
<mildred> yes, it just has to be thought how to do it
<mildred> how do you specify a format that is an extension of another format (JSON-LD is an extension of JSON, XHTML an extension of XML)
<mildred> how do you specify the format parameters (such as the text encoding)
<mildred> some formats make use of parameters in the mime type, do we want to support that as well https://tools.ietf.org/html/rfc4180#section-3
<mildred> (for example the text/csv media type could also be found like text/mydialect+csv; charset=utf-8; header=present)
<mildred> Do we want to encode all of this in the multicodec, if so, how ? We could also opt on not having all that
<jbenet_> mildred: ok go-multicodec is ready.
<jbenet_> now need to pull it into go-ipld
<jbenet_> which should be easy, just replace the use of https://github.com/ipfs/go-ipld/blob/master/coding/coding.go
<mildred> jbenet_ you want me to make the change?
<jbenet_> mildred: up to you, i dont have time right now, maybe take a stab at whatever you want? https://github.com/ipfs/go-ipld/issues
<jbenet_> mildred: settling the IPLD stuff may be the most important
<mildred> Ok, I might not have time for another three and a half hours, it's the end of my lunch break
<mildred> But I'll probably come up with something
<mildred> by the way, I'm in the CEST timezone
<ipfsbot> [go-ipfs] jbenet pushed 3 new commits to master: http://git.io/vGq0a
<ipfsbot> go-ipfs/master 73e820a Pavol Rusnak: Add --empty-repo option for init (#1559)...
<ipfsbot> go-ipfs/master 872daf8 Christian Couder: t0020: add test for --empty-repo...
<ipfsbot> go-ipfs/master 9f253df Juan Benet: Merge pull request #1592 from prusnak/empty-repo...
<stick`> \o/
<jbenet_> :)
jbenet_i1ssi has joined #ipfs
jbenet_irssi has quit [Read error: Connection reset by peer]
<jbenet_> mildred: thanks! also, i'll be in Paris, and will grab lunch with chriscool ~9/10th. i dont think i'll have time for an IPFS meetup but heads up if you wanted to join us.
<jbenet_> (i'm in CEST too atm)
<jbenet_i1ssi> well.. /jbenet/CEST, which really doesnt mean anything
therealplato has joined #ipfs
<mildred1> I'm far from Paris unfortunately
<davidar> jbenet: do you actually sleep?
<davidar> I can never predict when you're online or not :/
Encrypt has joined #ipfs
<jbenet_> mildred1: ah ok :)
<jbenet_> davidar: i am actually a robot.
<jbenet_> davidar: i have a terrible sleep "pattern", highly irregular.
<davidar> jbenet: as in, it changes, or fragmented?
<jbenet_> it changes a lot. sometimes can be fragmented.
<jbenet_> i've experimented with a lot of polyphasic sleep schedules. now i just sleep when my body wants to
<davidar> jbenet_: delayed phase disorder?
<ipfsbot> [go-ipfs] jbenet pushed 2 new commits to master: http://git.io/vGq6z
<ipfsbot> go-ipfs/master 4681db6 rht: Move dir-index-html + assets to a separate repo...
<ipfsbot> go-ipfs/master 1dac829 Juan Benet: Merge pull request #1487 from rht/gw-assets...
<jbenet_> no idea. maybe. i think i just decoupled from the sun
inconshreveable has joined #ipfs
<davidar> I have delayed phase. Hard to keep in sync with the sun, have to be conscious of sleep cycle
<davidar> Unfortunately rest of the world likes to sync with the sun :)
<cryptix> totally overrated :)
<davidar> cryptix: yeah, it's totally geocentric :)
<davidar> wasn't there that study where they stuck people underground, and they ended up with something like 36hr sleep cycles?
<cryptix> heard about it - never searched tho
<ehmry> do we have python bindings to the api yet?
<ipfsbot> [go-ipfs] rht opened pull request #1618: Humanize bytes in dir index listing (master...dir-index-humanize) http://git.io/vGqHd
<davidar> ehmry: don't think so
<ehmry> ok
<ehmry> I think I want something for the http api?
<ehmry> I see someone volunteered julia bindings before python :\
<ehmry> https://github.com/ipfs/node-ipfs-api would be the best reference?
<jbenet_> ehmry yes
<ehmry> ok
mildred1 has quit [Ping timeout: 246 seconds]
<davidar> I can cr for python, if necessary
<cryptix> ehmry: i remember somebody made py bindings for the api but i cant find the repo anymore :/
<Blame> I made some for testing ages ago
<Blame> but that was 2 pc wipes ago
<Blame> and im looking for a repo
<cryptix> ouch
<davidar> I imagine python should be a pretty straightforward translation of the js bindings anyway
<cryptix> (at least im still sane :P)
<davidar> cryptix: well... :)
<Blame> the requests lib makes it pretty easy
rht___ has joined #ipfs
<rht___> request for pin: QmeMZwyPnHMkvgdUNtdNcTX425gTCi5DCMeHLwTXbKoUB8
<davidar> Yeah, requests is pretty awesome
<rht___> can be online for at most 10 min
<cryptix> found the QmTkukZw6MBSfGZ2nTubdCsMeoKyNbrNidyGiJMUEh2dCx
<cryptix> python client in the irc logs ^^
<cryptix> rht___: got it
<davidar> Wow, that's even easier than I thought
<cryptix> blame: o/ :)
<cryptix> was a bit amazed it was still around
<cryptix> !pin QmeMZwyPnHMkvgdUNtdNcTX425gTCi5DCMeHLwTXbKoUB8
<pinbot> now pinning /ipfs/QmeMZwyPnHMkvgdUNtdNcTX425gTCi5DCMeHLwTXbKoUB8
<rht___> ok have to rush
<rht___> _/ _
<cryptix> cya
<daviddias> @jbenet thoughts on the multicodec-ld thing? Do you think that is an idea worth pursuing?
qqueue has quit [Ping timeout: 246 seconds]
vijayee_ has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
qqueue has joined #ipfs
<jbenet_i1ssi> daviddias: i've to think about it more. like what parts differ from other things we've discussed and other self-describing-to-computation approaches. like-- a "Fully Self Describing Info System" would be amazing to produce, and i think we're significantly close to something that would make vint cerf proud. i've to think a bit more about how this piece would work and fit with others
<jbenet_i1ssi> (in particular in regard to full paths or not in multicodec, and how to grab cross platform code)
<jbenet_i1ssi> that reminds me-- is there a full x86 vm in js yet? i remember seeing one, but are they any good?
<jbenet_i1ssi> cause i really want "ipfs boot <path>" to "just work" without installing anything else.
mildred1 has joined #ipfs
<daviddias> I've seen some stuff too, but I can't vouch for any.
<jbenet_i1ssi> whyrusleeping let me know when you're around
<daviddias> What could I build in order to unblock the development of IPRS that needs this LD structure for the new MerkleDAG node structures? I can do that research as well. Lost all of the issues+concerns that you have on that gh issue
slothbag has quit [Quit: Leaving.]
<davidar> what do you need x86 in js for?
<lgierth> because
<lgierth> :)
<lgierth> how is crypto code gonna run fast on arm otherwise???????
<davidar> hehe
<lgierth> a very smart performance hack, so to say
t3sserakt has joined #ipfs
* lgierth off to the bicycle repair shop
inconshreveable has quit [Remote host closed the connection]
mildred has quit [Ping timeout: 245 seconds]
<jbenet_> daviddias: sorry this is the next stuff there https://github.com/ipfs/go-ipld/issues
<jbenet_> daviddias (am working on starship atm)
inconshreveable has joined #ipfs
t3sserakt has quit [Quit: Leaving.]
t3sserakt has joined #ipfs
mildred has joined #ipfs
<daviddias> jbenet_: thanks :)
mildred has quit [Ping timeout: 250 seconds]
<stick`> !define ipld
<cryptix> stick`: linked data
<daviddias> IPLD - InterPlanetary Linked Data
<daviddias> it is what we are defining in order to have a more loose definition of a MerkleDAG node, so that other data structures can leverage the MerkleDAG links too
<stick`> i see
mildred has joined #ipfs
inconshreveable has quit []
mildred has quit [Ping timeout: 250 seconds]
jbenet_i1ssi has quit [Read error: Connection reset by peer]
kord has joined #ipfs
jbenet_irssi has joined #ipfs
Encrypt has quit [Ping timeout: 260 seconds]
<pjz> lgierth: in general you can also use emscripten to turn random C code into javascript
<pjz> lgierth: and bypass the x86 virtualization layer
vitzli has joined #ipfs
rht___ has quit [Quit: Connection closed for inactivity]
<vitzli> Hello, I played with ipfs today, and stopped by to say thank you, 'ipfs add -n' (chunk and hash) performance was really good - about 50 MB/s. (tested on the neo4j repo if that matters, but on a random file it was about the same)
<Blame> \q jbenet_
manu has quit [Read error: No route to host]
manu has joined #ipfs
mildred1 has quit [Ping timeout: 246 seconds]
Encrypt has joined #ipfs
voxelot has joined #ipfs
voxelot has joined #ipfs
mildred has joined #ipfs
mildred1 has joined #ipfs
mildred has quit [Ping timeout: 250 seconds]
Eudaimonstro has joined #ipfs
<vitzli> it seems it crashes with 'fatal error: runtime: out of memory' on large files though
pfraze has joined #ipfs
vitzli has quit [Quit: Leaving]
jbenet__ has joined #ipfs
jbenet_i1ssi has joined #ipfs
jbenet_i1ssi has quit [Client Quit]
jbenet__ has quit [Client Quit]
jbenet_ has quit [Ping timeout: 256 seconds]
jbenet_irssi has quit [Ping timeout: 272 seconds]
<lgierth> pjz: i know :) just kidding
akhavr has joined #ipfs
<whyrusleeping> jbenet: i'm awake
Tv` has joined #ipfs
mildred1 has quit [Ping timeout: 240 seconds]
<daviddias> good morning whyrusleeping :)
<whyrusleeping> daviddias: g'mornin!
<whyrusleeping> how are things?
<daviddias> We've been discussing linked-data stuff, it has been difficult to set a decision in stone on how we are going to use it
<whyrusleeping> yeah, my mental image of how we are going to use it is really blurry
<daviddias> I wrote an idea to use multicodec as our format to describe the data https://github.com/diasdavid/node-ipld/issues/2
<daviddias> wanna review it? :)
<whyrusleeping> shure
therealplato has quit [Read error: No route to host]
<daviddias> also, there were more developments here - https://github.com/ipfs/ipfs/issues/36#issuecomment-135339059
therealplato has joined #ipfs
<whyrusleeping> cool, mildreds bringing up some of the same points i did
therealplato1 has joined #ipfs
therealplato has quit [Read error: Connection reset by peer]
mildred has joined #ipfs
<whyrusleeping> daviddias: i dont really know that i have anything of value to add to the ipld conversation
<daviddias> ok, no problem :)
<daviddias> at least you are up to date in the convo
manu has quit [Quit: Computers. Bah! Who needs 'em.]
<whyrusleeping> daviddias: yeep, i'm very curious to see how this pans out. But if i were the one making the decisions, we would just be using protobufs
manu has joined #ipfs
mildred has quit [Ping timeout: 245 seconds]
mildred has joined #ipfs
<mildred> jbenet: I suspect that JSON-LD is going to be slower than our current IPFS objects. Do you have an idea how to solve that? We would need to have the context object and execute a full JSON-LD processor to be able to get the list of links. It's not straightforward.
<daviddias> whyrusleeping: I must say that cbor serialization is pretty sweet, you don't even have to have a proto file, just say encode/decode and there it goes
mildred has quit [Ping timeout: 250 seconds]
mildred has joined #ipfs
<whyrusleeping> daviddias: yeah, but its not any different than json in that aspect
<whyrusleeping> except that it gives nicer typing
Eudaimonstro has quit [Remote host closed the connection]
mildred1 has joined #ipfs
mildred has quit [Ping timeout: 265 seconds]
ygrek_ has joined #ipfs
ygrek_ has quit [Ping timeout: 250 seconds]
bedeho has joined #ipfs
Encrypt has quit [Quit: Quitte]
<ipfsbot> [go-ipfs] DavidHowlett opened pull request #1619: ipfs get needed the -o flag to function as intended (master...master) http://git.io/vGYj7
<whyrusleeping> :(
Eudaimonstro has joined #ipfs
dignifiedquire has quit [Quit: dignifiedquire]
<ehmry> multihashes are variable length, yes?
akhavr1 has joined #ipfs
<whyrusleeping> ehmry: they can be, yes
<ehmry> ty
akhavr has quit [Ping timeout: 240 seconds]
akhavr1 is now known as akhavr
patcon has joined #ipfs
amstocker_ has joined #ipfs
t3sserakt has quit [Quit: Leaving.]
<lgierth> davidar: ping
<lgierth> your ipfs instance needs a dialfilter
Eudaimonstro has quit [Remote host closed the connection]
Eudaimonstro has joined #ipfs
chriscool has quit [Ping timeout: 246 seconds]
<Tv`> hello people of.. err, interplanetary non-planet specific locales
<whyrusleeping> Tv`: heyo!
t3sserakt has joined #ipfs
dignifiedquire has joined #ipfs
<lgierth> davidar: pin
chriscool has joined #ipfs
<lgierth> davidar: gotta kill your ipfs on pollux, simply add the filters i linked ^ when you're back and bring it back up
Encrypt has joined #ipfs
<whyrusleeping> Tv`: question
<Tv`> whyrusleeping: condescending answer!
* Tv` . o O ( "<whyrusleeping> sneery retort about your lack of typing prowess!" )
<whyrusleeping> lol
<whyrusleeping> i was gonna go with 'snarky comment about lack of answer'
<whyrusleeping> i'm reading from an http response body and its actually reading data *and* returning an error
<whyrusleeping> and its reading the correct data
<Tv`> imagine tcp connection cut part way through
<whyrusleeping> shouldnt it just return the data it read, and an EOF or something on the next read?
<Tv`> if it actually came to the end of the data
<Tv`> even if you see a logically-whole body, that doesn't mean the sender said "i'm done now"
<whyrusleeping> hrm... how would the sender say that?
<Tv`> e.g. chunked encoding chunk size 0
<Tv`> for connection: close, clean tcp shutdown
<Tv`> for http/2 some unicorn blood magic
<Tv`> for Go, imagine http handler going io.Copy(w, myFile); time.Sleep(eternity)
<whyrusleeping> ?
<Tv`> manual chunking? scary ;)
ygrek_ has joined #ipfs
<whyrusleeping> yeah, go's http lib doesnt support trailers
<Tv`> actually it does now
pfraze has quit [Remote host closed the connection]
<whyrusleeping> wait
<whyrusleeping> how?
<whyrusleeping> tell me pls
<whyrusleeping> Tv`: o.o
<Tv`> you declare some headers to be trailers
<Tv`> and then they are safe to set after sending headers
<Tv`> i don't see anything obviously wrong with the manual chunkwriter, apart from not checking errors
<whyrusleeping> oh shit
<whyrusleeping> this is amazing
patcon has quit [Ping timeout: 256 seconds]
pfraze has joined #ipfs
dignifiedquire has quit [Quit: dignifiedquire]
Encrypt has quit [Ping timeout: 265 seconds]
<cryptix> re trailers: ah nice!
Encrypt_ has joined #ipfs
<cryptix> Tv`: saw your post about dynamic json - also very nice :)
<cryptix> whyrusleeping: didnt close notify also block on the custom chunking?
dignifiedquire has joined #ipfs
Encrypt_ is now known as Encrypt
<whyrusleeping> yeappp!
<cryptix> funky!
<alu> sometimes when i add something with ipfs
<alu> it cuts out without finishing
<alu> like at 70%
<lgierth> alu: how big are the files, and how many?
<alu> 80mb
<alu> 1 file this time
<whyrusleeping> alu: interesting...
<whyrusleeping> does it happen frequently?
<alu> hmm
<alu> more often nowadays
<alu> im on ubuntu 14.04
<whyrusleeping> huh... okay
Encrypt has quit [Quit: Quitte]
Eudaimonstro has quit [Ping timeout: 250 seconds]
<whyrusleeping> Tv`: you made my day, youre the best
<ipfsbot> [go-ipfs] whyrusleeping created real-trailers (+1 new commit): http://git.io/vG3eM
<ipfsbot> go-ipfs/real-trailers 1f280e1 Jeromy: use go's built in handling of trailers and dont do custom chunking...
<ipfsbot> [go-ipfs] whyrusleeping opened pull request #1621: use go's built in handling of trailers and dont do custom chunking (master...real-trailers) http://git.io/vG3eh
<Tv`> whee
Leer10 has quit [Excess Flood]
Leer10 has joined #ipfs
Encrypt has joined #ipfs
<cryptix> verry nice
<whyrusleeping> anyone have any idea what jbenet's sleep schedule is like now?
<cryptix> idk his earlier ip looked airportish?
<cryptix> he was heading for paris iirc
<whyrusleeping> oooooh, okay
<cryptix> the sadhack pkg is still annoying in go1.5 maybe we can // +build ignore it?
<whyrusleeping> will that work for godeps?
<cryptix> to pick it up? hrm yea
<cryptix> idk maybe not
<stick`> maybe the !sleepschedule <nick> command would do the trick :-)
<cryptix> well.. not for jbenet :)
<stick`> one would need to configure their schedule via query :)
therealplato1 has quit [Read error: Connection reset by peer]
therealplato has joined #ipfs
therealplato1 has joined #ipfs
Encrypt has quit [Quit: Quitte]
<whyrusleeping> jbenet has been idle for nearly 10 hours. thats strange for him. i hope hes not dead
therealplato has quit [Ping timeout: 244 seconds]
Encrypt has joined #ipfs
<cryptix> whyrusleeping: no he was around earlier but couldnt reach irccloud (i guess)
chriscool has quit [Ping timeout: 240 seconds]
<cryptix> lol wat no.. sorry my timesense is messed up
<cryptix> tested #1621 locally with sharness - passed. rebased #1388 (skipping last commit) also passed
<cryptix> (p2p/net/mock TestNotification fails on timeout but it also did before on 1.5)
<whyrusleeping> cryptix: build ignore doesnt work for sadhack :(
<whyrusleeping> and 1388 working makes me happy :D
<whyrusleeping> i'm trying to debug why switching back to the default http client causes t0043 to fail...
<cryptix> diff?
pfraze has quit [Remote host closed the connection]
<cryptix> running
<cryptix> whyrusleeping: hm yea.. thats weird - no errors, different hash
<cryptix> whyrusleeping: did you run this on 1621 alone? guess so
<whyrusleeping> yeah, just on 1621
<whyrusleeping> because without 1621 other things break
<cryptix> whyrusleeping: idk what to make of this - iirc the custom keepalive:false was set to get around blocking long requests in the client conn pool, no?
<cryptix> sharness tests just showing different hashes without errors confuses me
<whyrusleeping> the keepalive:false was used because we were hijacking the connection server side
<whyrusleeping> which means we either dont support keepalives, or manually implement them
<cryptix> kk
<whyrusleeping> it *should* let us reuse tcp connections for multiple http requests when its working
<cryptix> but doesnt the first request need to end first with http/1.1 pipelining?
<whyrusleeping> go manages all that
<cryptix> ie if the first one is long running the client cant send another request until its done
<cryptix> interesting
<whyrusleeping> i think, at least
<whyrusleeping> it keeps a pool of connections
ygrek_ has quit [Ping timeout: 256 seconds]
domanic has joined #ipfs
<cryptix> ok - ill assume that works :)
<whyrusleeping> lol
<cryptix> sigh - can we pin all the test data to the network?
<cryptix> test result -QmcCksBMDuuyuyfAMMNzEAx6Z7jTrdRy9a23WpufAhG9ji +QmbFMke1KXqnYyBBWxB74N4c5SBnJMVAiMNRcGu6x1AwQH is annoying if you cant get the - one without running the tests again
<cryptix> i can use IPFS_PATH=trashdir for the new data
<whyrusleeping> i think the - one is hardcoded
<whyrusleeping> to be expected, no?
<cryptix> but for the expected id need to run a 2nd pass of the code
<cryptix> yea - running a working tree sure
<whyrusleeping> we should probably always comment whenever we have a hardcoded hash...
<whyrusleeping> maybe
Eudaimonstro has joined #ipfs
<cryptix> i still wonder why just the hashes changed but diffing that is a bit cumbersome right now - ill work on git-remote-ipfs push
<whyrusleeping> yes, do that!
<whyrusleeping> lol
<ipfsbot> [go-ipfs] whyrusleeping created feat/rm-worker (+1 new commit): http://git.io/vG3Rg
<ipfsbot> go-ipfs/feat/rm-worker 06c7063 Jeromy: dont need blockservice workers anymore...
<ipfsbot> [go-ipfs] whyrusleeping opened pull request #1622: dont need blockservice workers anymore (master...feat/rm-worker) http://git.io/vG30Y
bedeho has quit [Remote host closed the connection]
pfraze has joined #ipfs
ygrek_ has joined #ipfs
mildred1 has quit [Ping timeout: 256 seconds]
pfraze has quit [Remote host closed the connection]
t3sserakt has quit [Quit: Leaving.]
amstocker_ has quit [Ping timeout: 244 seconds]
dlight has quit [Remote host closed the connection]
gordonb has joined #ipfs
<cryptix> whyrusleeping: go-ipfs-shell.PatchLink: what does the create arg do? i basically just want add or replace to i need to rm-link first?
<cryptix> s/to/do/
Encrypt has quit [Quit: Quitte]
therealplato1 has quit [Read error: Connection reset by peer]
dignifiedquire has quit [Quit: dignifiedquire]
domanic has quit [Ping timeout: 240 seconds]
therealplato has joined #ipfs
<whyrusleeping> cryptix: create will create intermediate directories on the way down
<whyrusleeping> i believe
<whyrusleeping> anyone from ccc want to sell their hackrf to me?
<cryptix> heh - nice try
<whyrusleeping> :(
Eudaimonstro has quit [Remote host closed the connection]
<ipfsbot> [go-ipfs] whyrusleeping closed pull request #1361: CHANGELOG.md: Start tracking user-visible changes for 0.3.6 (master...tk/changelog) http://git.io/vI7vK
<ipfsbot> [go-ipfs] whyrusleeping deleted tk/changelog at f381a7d: http://git.io/vG3xq
<Tv`> sooo
<Tv`> if you were to use ipfs as a CAS store for *another* thing, but a thing that cares about its own keys, how would you do
<Tv`> +it
<Tv`> seems like there'd need to be a lookup table
<voxelot> getting Error: Unrecognized option 'api' when trying to inspect the request to api from version 0.3.8-dev
Leer10 has quit [Read error: Connection reset by peer]
<cryptix> sigh.. nearly done with git-remote-ipfs but one weirdness.. prop. need sleep