<kyledrake>
Even just splitting out the vars into a config.json or something
<davidar>
lgierth: ^ thoughts?
<kyledrake>
I'm not sure if it makes sense or not. The purpose for me is to preserve path links (<img src="/cat.png">) and jail the security in a subdomain.
<davidar>
yeah, makes sense to me, and i think concerns about xss attacks on the public gateway have been raised here before (not sure if any other solutions were proposed)
<kyledrake>
FWIW I wouldn't run apps using IPFS via HTTP without using a subdomain. Otherwise all that data is available to every app (localStorage, document.cookies, etc)
<kyledrake>
The real solution is path based security but we're a long ways from that.
<davidar>
kyledrake: i guess i'm more interested in being able to spread parts of a dataset across nodes, rather than complete replication
<davidar>
but with enough duplication to be robust to nodes going offline
<davidar>
(or being malicious)
<davidar>
kind of like what internetarchive.bak is trying to do, but for ipfs :)
ianopolous has quit [Ping timeout: 246 seconds]
anshukla has joined #ipfs
<kyledrake>
davidar If you got that to work well it's a pandora's box for sure. The solutions out there are absolutely horrible.
<kyledrake>
There's a reason everybody is using S3 right now
Tv` has quit [Quit: Connection closed for inactivity]
<kyledrake>
It's a pretty big problem to tackle though. Things like self-healing, ability to see status and health, etc.
anshukla has quit [Ping timeout: 244 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<kyledrake>
The ability to configure regions so you don't end up storing all three copies of some data in the same physical location where the sysadmin goes berzerker and drags a firehose into the DC
amstocker has joined #ipfs
<davidar>
haha
<davidar>
i guess something that mostly works would be a good start
amstocker__ has quit [Ping timeout: 246 seconds]
<davidar>
and i believe bitswap and filecoin are supposed to address some of those things at least
amstocker has quit [Ping timeout: 246 seconds]
<davidar>
ping jbenet
bedeho has quit [Ping timeout: 252 seconds]
zabirauf has joined #ipfs
<jbenet>
davidar o/
zabirauf has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<jbenet>
need to catch up with backlog. I'll be around in a while though
Encrypt_ has joined #ipfs
<ipfsbot>
[go-ipfs] rht opened pull request #1641: Serialfile: Explicit err on unrecognized file type (master...namedpipe) http://git.io/vGSam
atomotic has joined #ipfs
Eudaimonstro has quit [Remote host closed the connection]
<ipfsbot>
[go-ipfs] jbenet pushed 2 new commits to master: http://git.io/vGSor
<ipfsbot>
go-ipfs/master 4b3a21e rht: Humanize bytes in dir index listing...
<ipfsbot>
go-ipfs/master 9c40bf1 Juan Benet: Merge pull request #1618 from rht/dir-index-humanize...
Encrypt_ has quit [Quit: Quitte]
anshukla has joined #ipfs
ei-slackbot-ipfs has quit [Remote host closed the connection]
ei-slackbot-ipfs has joined #ipfs
atomotic has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<lgierth>
davidar: we were thinking something else regarding hosting arbitrary pages
<lgierth>
CNAME example.net => gateway.ipfs.io (or rather ipfs.io very soon)
<lgierth>
and TXT example.net dnslink=/ipns/Qmfoobar
<davidar>
lgierth: but not everyone has their own domain name...
<lgierth>
that's tricky because you can't have CNAME and TXT both at once, so it'd be something like TXT ipfs.example.net dnslink=/ipns/Qmfoobar
anshukla has quit [Ping timeout: 244 seconds]
<lgierth>
yeah
<davidar>
also, i vote _ipfs.example.net so there's less chance of conflicts with real subdomains
<davidar>
(with an underscore)
<lgierth>
ok yeah that makes sense
<lgierth>
bbiab
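In zone-file form, the setup being discussed would look roughly like this (a sketch only; example.net, gateway.ipfs.io, and Qmfoobar are the placeholders from the conversation, with davidar's underscore prefix):

```
; web traffic lands on the gateway via the CNAME
example.net.        IN  CNAME  gateway.ipfs.io.

; the dnslink TXT sits on the underscore-prefixed name, since a
; name that carries a CNAME cannot also carry its own TXT record
_ipfs.example.net.  IN  TXT    "dnslink=/ipns/Qmfoobar"
```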
hellertime has joined #ipfs
<Luzifer>
cryptix: ping
<jbenet>
maybe we should look for "<domain>" and "_dnslink.<domain>". (dnslink can thus be more general than just IPFS). And i say both because we may have cases where users have IPFS implemented at the browser level, which would use `/ipns/<domain>` and go straight for the TXT (avoiding the CNAME). the CNAME is a hack around today's world of {HTTP + DNS/A +
<jbenet>
DNS/CNAME}
<davidar>
jbenet: what if you cname to a gateway that already has its own txt record?
<jbenet>
davidar ?? <domain> is from the src domain? not understanding.
m0ns00n has joined #ipfs
<jbenet>
davidar so if i do `CNAME my.domain ipfs.io` i should also do `TXT my.domain dnslink=<dnslink-path>` -- but i cant, so i do `TXT _dnslink.my.domain dnslink=<dnslink-path>`
<davidar>
well, i guess it depends on which order you check
<davidar>
(i think)
<jbenet>
davidar the gateway resolution then checks the request's Host header. It sees "my.domain", it then looks for "TXT my.domain" and "TXT _dnslink.my.domain" in that order
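The lookup order described above could be sketched in Go roughly as follows. This is a hypothetical sketch, not the actual go-ipfs code; `parseDNSLink` and `resolveDNSLink` are made-up helper names, and only `net.LookupTXT` is a real API:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseDNSLink scans TXT records for a "dnslink=" entry and
// returns the path it points at, if any.
func parseDNSLink(records []string) (string, bool) {
	for _, r := range records {
		if strings.HasPrefix(r, "dnslink=") {
			return strings.TrimPrefix(r, "dnslink="), true
		}
	}
	return "", false
}

// resolveDNSLink checks "TXT <domain>" first, then
// "TXT _dnslink.<domain>", mirroring the order described above.
func resolveDNSLink(domain string) (string, error) {
	for _, name := range []string{domain, "_dnslink." + domain} {
		txts, err := net.LookupTXT(name)
		if err != nil {
			continue // try the next candidate name
		}
		if link, ok := parseDNSLink(txts); ok {
			return link, nil
		}
	}
	return "", fmt.Errorf("no dnslink record for %s", domain)
}

func main() {
	// parseDNSLink works without touching the network:
	link, ok := parseDNSLink([]string{"v=spf1 -all", "dnslink=/ipns/Qmfoobar"})
	fmt.Println(ok, link) // → true /ipns/Qmfoobar
}
```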
<davidar>
so, if you have:
<m0ns00n>
:)
<davidar>
example.com CNAME gateway.com
<jbenet>
(this may cause a problem but i'm not seeing it)
<davidar>
gateway.com TXT "dnslink=foo"
<davidar>
then, won't the cname cause: example.com TXT "dnslink=foo" also?
<davidar>
even if you have: _dnslink.example.com TXT "dnslink=bar"
<jbenet>
the TXT needs to be on example.com, not gateway.com. gateway.com may have one too, but unrelated to the example.com domain.
<davidar>
i know, but won't cname cause example.com to inherit any txt records on gateway.com?
<davidar>
hence, you'd end up with example.com resolving to an unrelated dnslink?
<jbenet>
would it? (i'm not familiar with this rule). on our side, we do the checking so we can totally just go to example.com
<davidar>
i don't really know, but i thought the point of cname was to inherit all of the records from the target
<davidar>
hence why root level cnames are discouraged since they cause problems with mx records
<davidar>
anyone more familiar with the specifics of dns resolution able to comment?
<mildred>
davidar: I'd agree with you, CNAME makes you unable to specify any other record, and you get the records of the CNAMEd domain
<davidar>
anyway, my point is that just going with _dnslink.example.com in all cases is less likely to cause problems, even if it's slightly less nice looking
<mildred>
Using underscore domains is also a common practice for TXT records
<jbenet>
(maybe we're not doing this right according to DNS, but) the CNAME for ipfs works this way: (1) CNAME causes request for example.com to land on gateway.com. (2) an HTTP server at "gateway.com:80" gets the request, sees a Host header of "example.com" so it rewrites the request from /$path to "/ipns/example.com/$path" (3) gateway now has
<jbenet>
"/ipns/example.com/$path" so it looks for a dnslink TXT record for example.com (checking TXT example.com and TXT _dnslink.example.com), if there is one) (4) re-handle as "<dnslink-value>/$path"
<jbenet>
rewrite/rehandle/rebase/
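Step (2), the Host-header rebase, is small enough to sketch. These are hypothetical function names, not the real gateway code:

```go
package main

import (
	"fmt"
	"net/http"
)

// rewriteToIPNS implements step (2): the gateway sees a Host header
// that is not its own name and rebases the request path under
// /ipns/<host>, so dnslink resolution can take over from there.
func rewriteToIPNS(host, path string) string {
	return "/ipns/" + host + path
}

// gatewayHandler sketches steps (1)-(4): the CNAME lands the request
// here, we rebase, and a real gateway would then resolve the dnslink
// TXT record and re-handle the rebased path.
func gatewayHandler(w http.ResponseWriter, r *http.Request) {
	rebased := rewriteToIPNS(r.Host, r.URL.Path)
	fmt.Fprintln(w, "would resolve:", rebased)
}

func main() {
	fmt.Println(rewriteToIPNS("example.com", "/cat.png"))
	// → /ipns/example.com/cat.png
}
```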
<mildred>
if example.com doesnt have a dnslink TXT record, I see no problem with this
<mildred>
sorry, s/example.com/gateway.com/
<mildred>
jbenet: I also wanted to talk to you about Linked Data
<davidar>
if gateway.com already has a TXT record
<davidar>
then dig -t TXT example.com will give you gateway.com's TXT record
<jbenet>
mildred: even if gateway.com has a dnslink TXT record, it should work. --- in reality, IPFS is not using CNAME, it's the "web browser + dns client" combo that is using CNAME to land on gateway.com. this could also be a proxy on the client side, or a hosts file. IPFS doesnt touch CNAME directly.
<davidar>
in which case ipns will never bother to look at _dnslink.gateway.com, because it already has an answer
<jbenet>
davidar: my issue with _dnslink.<domain> always is that domains are restricted in length and it is typical to max that limitation out with hashes. so i dont want to create a world where "you cant use this trick with domain X because we need len(X) + 6 chars...
<jbenet>
davidar: wait, really? "dig -t TXT example.com" will give you gateway.com's TXT ?
<davidar>
i think so, yeah
<davidar>
haven't tested it though
<jbenet>
mildred: yeah i'm up for it. i haven't written down the conclusions daviddias and i got to the other night-- (a bit overloaded). i'll try to next.
<jbenet>
davidar: that would be annoying. we should figure that out.
<davidar>
jbenet: yeah, i could be wrong, but definitely make sure first :)
<davidar>
even if the spec says no, you never know what dns servers are going to do in reality :)
<jbenet>
yeah but i think Go does its own resolution, not sure if it delegates to a local dns server.
<jbenet>
((sidenote: im bothered that they dumped all the dns stuff into the net package, instead of net/dns -- http://golang.org/pkg/net/#LookupTXT))
<davidar>
e.g. cloudflare dns likes to do cname flattening, so you can't necessarily tell whether records came from example.com or gateway.com originally
<davidar>
anyway, no idea if it'll actually be a problem, just don't assume the authoritative dns server isn't going to try to spite you :)
<jbenet>
indeed
<cryptix>
gmorning
atomotic has joined #ipfs
<davidar>
morning
voxelot has joined #ipfs
<davidar>
ugh, they don't make osm data easy to process
Encrypt_ has joined #ipfs
mildred has quit [Ping timeout: 246 seconds]
<Luzifer>
"morning" cryptix
<cryptix>
:)
mildred has joined #ipfs
<mildred>
jbenet: what's the use case for non-linked-data JSON documents as IPFS objects?
<jbenet>
mildred: i'll write up the json-ld conclusions daviddias and i reached now, and tag you next
<jbenet>
mildred: oh the main use case is a more flexible data structure for end users
<jbenet>
something that people can trivially use to represent arbitrary data structures in apps on the web
<mildred>
(we can always provide JSON on top of Linked Data (JSON or not), by embedding the JSON as a string)
<jbenet>
e.g. "dump your database directly onto the web"
<jbenet>
no that's precisely what i'm trying to avoid
<jbenet>
i want them to give us the json so we can embed it into one massive datastructure
<jbenet>
that is traversable
<mildred>
but then, you can't really use Linked Data directly. Having full JSON compatibility is not currently possible with JSON-LD, and you'll always get processing overhead.
<jbenet>
so for example, if i have A={"foo": {"bar": {"mlink": "/ipfs/<hash-of-B>"} } } and B= {"baz": [1, 2, {"p": "q"} ] }, i can do "/ipfs/<hash-of-a>/foo/bar/baz/2/p" and get "q"
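A toy version of that traversal, with an in-memory map standing in for the block store and "hash-of-A"/"hash-of-B" as stand-in hashes (none of this is the real go-ipfs resolver):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// store stands in for IPFS block storage: hash → decoded JSON object.
// JSON numbers decode as float64, hence 1.0 and 2.0.
var store = map[string]interface{}{
	"hash-of-B": map[string]interface{}{
		"baz": []interface{}{1.0, 2.0, map[string]interface{}{"p": "q"}},
	},
	"hash-of-A": map[string]interface{}{
		"foo": map[string]interface{}{
			"bar": map[string]interface{}{"mlink": "/ipfs/hash-of-B"},
		},
	},
}

// resolve walks a path like foo/bar/baz/2/p through nested JSON,
// following {"mlink": "/ipfs/<hash>"} indirections transparently.
func resolve(node interface{}, path []string) (interface{}, error) {
	for len(path) > 0 {
		// follow a merkle link before consuming the next segment
		if m, ok := node.(map[string]interface{}); ok {
			if link, ok := m["mlink"].(string); ok && len(m) == 1 {
				node = store[strings.TrimPrefix(link, "/ipfs/")]
				continue
			}
		}
		seg := path[0]
		path = path[1:]
		switch n := node.(type) {
		case map[string]interface{}:
			node = n[seg]
		case []interface{}:
			i, err := strconv.Atoi(seg)
			if err != nil || i < 0 || i >= len(n) {
				return nil, fmt.Errorf("bad index %q", seg)
			}
			node = n[i]
		default:
			return nil, fmt.Errorf("cannot traverse into %v", n)
		}
	}
	return node, nil
}

func main() {
	v, _ := resolve(store["hash-of-A"], []string{"foo", "bar", "baz", "2", "p"})
	fmt.Println(v) // → q
}
```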
compleatang has joined #ipfs
<jbenet>
as davidar and i both realized, think of one massive tree with merkle-links + immutability as the way to make it work in large disconnected + fast networks
<mildred>
Ok, that's an IPFS-specific dialect.
<jbenet>
mildred: yeah, i thought JSON-LD was less restrictive and we could do it natively, but we likely wont be able to without severe @context hackery or our own parsers.
<mildred>
Better then to use plain JSON I think, and let people add @context themselves and interpret it as JSON-LD. Not do that ourselves
<jbenet>
mildred: the gist of what daviddias and i got to is that we're doing this in two steps: (1) move to using JSON, with { "mlink": "<multihash>" }, (2) think through how to make the -LD happen correctly. we can make { "mlink": "<multihash>" } work just fine within JSON-LD, it's the other typing that's harder.
<davidar>
jbenet: yeah, it would be very cool to dump a massive json object into ipfs and transparently address it like that :)
<mildred>
We can interpret the IPFS dialect fine using JSON-LD. But IPFS won't be able to interpret any kind of JSON-LD (as I thought before) for obvious performance reasons among other things
<jbenet>
yeah and that may be ok. JSON-LD is a strict subset of JSON, so by making a JSON transport, we can transparently address JSON-LD data.
<mildred>
in IPLD, we'll in fact define a subset of JSON-LD we understand
<mildred>
It will be able to hold any linked data provided it is compatible with our format
<jbenet>
mildred the right thing to do is to think more about what would make JSON-LD much, much easier to use for mortals, and then suggest it as a sub-spec. i think most people would be ok using a restricted subset of JSON-LD if it meant they could LDify their data structures incrementally, not in one go, and without having to change their data structure at all.
<jbenet>
((btw, worth acknowledging this is "the good fight" in which the JSON-LD team already won a huge amount of ground relative to using RDF in web APIs. we just have more ground to cover))
<jbenet>
mildred: yeah exactly. i think we're on the same page
<mildred>
ok, I admit I'm a little biased, I spent the last few months working with RDF datasets :)
voxelot has quit []
<jbenet>
mildred: we'll also have to break from tradition and understand links like "/ipfs/<hash>" instead of just "<scheme>:<data>". wish i had a time machine.
<mildred>
jbenet: why don't IPFS URIs define a scheme?
<mildred>
If you want your URI to be valid in an RDF dataset, you must have a scheme I believe
<mildred>
Why did you make that choice?
<jbenet>
mildred: hahaha yeah, _i_ dont have much qualms with using RDF. but i do recognize the massive complexity getting most people to use it. and because i want LD to win, i also want to make it easier for people to use it.
<jbenet>
mildred: i've yet to explain this succinctly somewhere, but the gist is this: the scheme identifier is not unix compatible. in the unix (and plan9) world, a protocol is a path component like everything else.
<mildred>
right, but the scheme identifier is internet compatible, and that's what most people use now before unix
<jbenet>
mildred: /http/foo.com/bar/baz works with any unix system. UNIX already had the URIs space. TBL and the nascent W3C somehow chose to part from this-- mostly to make life a little bit easier by making things look uniform in a world with vastly different systems.
<mildred>
Wish we had GNU Hurd translators
<mildred>
(and everyone was using Hurd, because it's obviously better)
anshukla has joined #ipfs
<jbenet>
mildred: the "unix path namespace" is all the URI we need, and it is very much internet compatible. the world of filesystems was very skeptical of the web naming at the beginning because people already used massively networked filesystems with pathing just fine. pathing is much more general and friendly and organic than having to use protocol scheme
<jbenet>
identifiers.
<davidar>
Just to clarify, json-ld is essentially just a way to describe schema for json, right?
<jbenet>
mildred the only problem is that the browser cartel only understands protocol scheme identifiers
<jbenet>
mildred: tbh i havent dug into the GNU Hurd memespace yet (it's on the stack, cause i know there's lots to learn there), but looking at https://www.gnu.org/software/hurd/hurd/translator.html i very much think we're aligned
<mildred>
davidar: yes, JSON-LD defines a context to tell you how to interpret the data
<mildred>
The problem is that Hurd is close to dead IMO
<mildred>
The other problem is that browsers make the law out there
<jbenet>
look at that! look at that ugly ":/". (that's how i feel about it, :/) it could very well just be /https/www.gnu.org/software/hurd/hurd/translator.html -- we just need to teach browsers to understand it.
anshukla has quit [Ping timeout: 246 seconds]
<jbenet>
mildred: a step along the way may be to define "unixweb:" and "unixweb:/" to work. (or "nixweb:/"?)
<jbenet>
i was going to go for "x:" or "x:/" but i haven't evaluated _how_ mad people will be at me.
<mildred>
The advantage of URL is that http://www.gnu.org/software/hurd declares that the path is defined in an RFC. Paths, on the other hand, are dependent on the special mount points you have on your computer
<mildred>
Or you standardize the root mount points like you standardize the URI schemes. But then it's still ambiguous, as you can choose whatever you want for your own computer
<davidar>
I seem to remember reading something that said tbl would prefer /https/org/gnu/software/... in hindsight
<mildred>
The advantage of URL is that they mean the same thing on every computer
<jbenet>
yeah, that's no different than protocol identifiers, your system defines protocol identifiers, which tends to mean 99% of people only see {http, https}:// and OSes defer to asking the network what to do for unrecognized identifiers
<jbenet>
mildred this pars with-- sec
<davidar>
Mildred: Tell that to every script that starts with /bin/... ☺
<jbenet>
a goal is to shove everything in computers' / into /local and mount protocols in the root. thankfully systems have been restricting "/" to require sudo for decades, so it's not much of a mess.
<ehd>
hello!
<mildred>
How will that work on Windows? :)
<mildred>
hello
<jbenet>
mildred: windows is one idea space i tend not to consider much in designs. the windows decision space is so full of bad decisions (non universal and complex) that i figure people can add one more kludge to make things work there.
<jbenet>
(like for example, npm doesnt work well in windows because windows paths are limited in length and people hit the limit often)
<jbenet>
form a systems perspective: (╯^_^)╯︵ ┻━┻
<mildred>
unix paths just have a larger limit
<mildred>
But I do agree with you nevertheless
<jbenet>
yeah, _some_ limit has to exist, or it would be a huge hazard, but their limitation is very small compared to what paths are used for all over the place.
* mildred
likes to argue for no other reason
<ehd>
daviddias: sorry i haven't helped out with libp2p/ipfs.js things yet. very much tied up in other work, sorry
<jbenet>
http:com/ ? :( that may be even worse. not even http:dns!
<mildred>
never mind, I understand
marianoguerra has quit [Quit: leaving]
<davidar>
I dunno, I kind of prefer com/example/www/ to www.example.com/
<jbenet>
well, not worse, you can still make it work. ok i would've been happy with http/dns/example.com/<path> or even http/example.com/<path>, and use some reserved keyword for a "host-relative" path (today's "absolute path in this host") like $host/foo/bar anywhere in /http/example.com would tell the browser to go to /http/example.com/foo/bar
<jbenet>
davidar: yeah me too, but then we have to redo dns too.
<jbenet>
Oh! now i parse it
<jbenet>
"http:com/example/foo/bar/baz" is "http" for "com/example/foo/bar/baz"
<davidar>
Yeah
<jbenet>
i parsed "http:com" together because my mind is _so_ stuck to unix paths :)
<jbenet>
and lots of other systems use : as a second separator in paths.
<jbenet>
like "A:a/B:b/C:c"
<mildred>
jbenet: How do you know which keys to expand and which you don't? What if you have an object {"mlink": {"mlink": "hash"}, "README.md": {"mlink": "hash"}} that represents a directory containing an "mlink" file and a "README.md" file...
<jbenet>
mildred: hm i thought we addressed that but now i can't recall. maybe we need { "@type": "mlink", "hash": "<hash>" } back?
<jbenet>
cc daviddias in case you recall how to distinguish mildred's case... o/
<mildred>
and if I have a file named "@type" in my directory, same problem here
<daviddias>
ehd sure no problem, no need to be sorry :)
<daviddias>
mildred: jbenet if you have things named mlink, the dev would have to use @type per obj to avoid a global @context that would remap things
<daviddias>
If I recall and understand correctly, it is the same solution native json-LD offers
<daviddias>
It is hard to map keys to contexts if you have keys that are extra to the LD of the context and that you want to preserve
<daviddias>
I guess that's why json-LD decided that the "best" option would be to remove anything that wasn't part of the context and make the developers accept that if something has the key defined in the context, then it is part of the context
<mildred>
The difference is that for Linked Data, what holds the meaning is the URI corresponding to the type that is in the context. If you have a key in your JSON that is in conflict, you can remap the key. The type URI won't change
<daviddias>
Right now the version of ipld that I made in node (check diasdavid/node-ipld) maps every mlink to our URI of merkle-link
<mildred>
In our case, we have a trickier problem as the JSON keys hold a meaning (in our cases the filenames in a unix directory). We can't remap them
<daviddias>
True, but in that case, I can make a json blob that doesn't use a global context, but uses @type instead
<daviddias>
We don't have an elegant solution for that yet, but I guess we can always check if it's an actual valid URI in case of conflict
<daviddias>
We are open to ideas though
<daviddias>
ipld at this moment is the minimum viable ld so that we can have what we want for MerkleDAG objs, that doesn't break json-ld schemas.
<mildred>
I'm not so sure we aren't breaking JSON-LD processing with the format described currently. We can always run our own processor but I'm not sure any processor would work.
<mildred>
If you require the '/' character not to be part of the link name, you could always mandate links to start with "/"
<mildred>
You would have: {"/mlink": {"mlink": "hash"}, "/README.md": {"mlink": "hash"}}
anshukla has joined #ipfs
<mildred>
If the JSON key contains a "/", it is a link. else it is a key to be looked up from the @context.
<mildred>
What do you think of that?
<daviddias>
That could work, yes :) but I guess that is a hack in how unixfs should store files in the MerkleDAG, and not part of the ipld spec itself. This way it is up to the developer to overcome a specific challenge of their data structure, depending on the data they are dealing with
<daviddias>
For example, IPRS doesn't have that problem
anshukla has quit [Ping timeout: 265 seconds]
thomasre_ has quit []
<mildred>
In IPLD spec, you still have something that compact links, right ?
<mildred>
or is it specific to unixfs ?
<daviddias>
Right now we have just expand
<daviddias>
So that people write them and a MerkleDAG processor knows how to interpret them
<mildred>
In fact, the IPLD spec is a subset of JSON (link compaction, can't use keys starting with '@') while being a superset of JSON-LD (can be parsed by JSON-LD processors but would lose specific IPLD meaning)
<jbenet>
mildred: i dislike that because `/<thing>` is typically an absolute path across systems.
<mildred>
jbenet: would that convince you: {"mlink/": {"mlink": "hash"}, "README.md/": {"mlink": "hash"}}
<daviddias>
jbenet: the prefixing of a char might be a workaround if a system needs to have all the keys in the world available
<jbenet>
(hah), no, how can we make it work by just changing _the data that needs to be special_
<jbenet>
what i mean is that it is typically a huge problem to introduce any constraint like (oh just change all the other keys to ...)
<mildred>
It's either that or something like: {"links": {"mlink": {"mlink": "hash"}, "README.md": {"mlink": "hash"}, "some-dir": { "links": { "test.c": {"mlink": "hash"} } }}} which is 100% compatible with JSON-LD
<jbenet>
i'd go for that before the others-- but, help me understand i'm not seeing the problem.
pfraze has joined #ipfs
<jbenet>
why does {"@type": "mlink", "hash": "<multihash>" } not work?
<mildred>
The problem is that there is a possible conflict between a filename and an IPLD/JSON-LD specific key (@type in that case)
<jbenet>
right, the reserved keywords
<mildred>
I want to have a file named @type
<mildred>
either you namespace it by prefixing/suffixing the key strings, or you put the keys in a sub-object that will never contain a @type value (that's the meaning of @container:@index by the way)
<jbenet>
mildred: so this could be fixed if unixfs tree said: { "@container": "@index", "foo": { ... }, "bar": { ... } } ?
<jbenet>
?
<jbenet>
(fwiw, i think the `"links": { ... }` solution is the best so far).
<mildred>
jbenet: not quite. More like { @context: {files: {"@container": "@index"}}, files: {"foo": { ... }, "bar": { ... } } }
<reit>
is there currently a method of using something like object patch to add an object to a child further down in the tree? so like ipfs object patch <hash>/some/directory/structure add-link myfile.html <hash> where it recursively rejigs the tree and returns the new root node
<jbenet>
mildred: can't put it on the toplevel dict?
<jbenet>
reit: yeah, i think ipfs object patch supports that already? (we called it "bubbling up")
<mildred>
jbenet: you'll have the same problem if you want a file named "@container" :)
<jbenet>
mildred sorry no i meant in the @context. but yeah there is a problem in general with reserved keywords of any kind, but that's fine
<jbenet>
so i guess: { "@context": { "@container": "@index" }, "foo": { ... }, "bar": { ... } } ?
<mildred>
I really like the last JSON that I put here. Fully compatible with JSON-LD, I like it a lot
<reit>
jbenet: ah, i tried that but i'm just getting Error: incorrectly formatted root hash
<reit>
perhaps it's a recent addition
<jbenet>
isn't this one compatible though? o/ (aside from being able to have a file named "@context")
<mildred>
jbenet: a file named "@context" ...? but yes it would work if the context is transmitted out of band (outside of the JSON)
<jbenet>
reit: no it should be like 0.3.6 or 0.3.7
<reit>
0.3.8-dev here
<mildred>
jbenet: transmitting the context out of band is specifically permitted by the JSON-LD spec
<jbenet>
mildred: ok. it's not ideal but a possibility. i also like yours, but i wanted to have super super simple directories
<mildred>
jbenet: don't forget that sub directories will have to use an extra level of indirection: { "foo": { ... }, "bar": { "links": { "baz": {...} } ... } } with a context containing { "@container": "@index", "links": {"@container": "@index"} }
<jbenet>
the introduction of a level is annoying.
<jbenet>
(but it also gets rid of other annoyances)
<lgierth>
davidar: tl;dr, we probably can't use CNAME
<jbenet>
mildred oh interesting, a result of this is that we could treat trees not like single dirs, but as union dirs.
<mildred>
It also has the advantage that you can specify some property about the complete directory
<lgierth>
because CNAME example.net => ipfs.io would inherit all of ipfs.io's records, like MX, TXT, etc.
<jbenet>
which may make dir sharding easy/weird and interesting.
<lgierth>
right?
<jbenet>
not sure if the simple unixfs dir should do that, probably not, but it's interesting for other things
<jbenet>
"overlaytree"
<jbenet>
lgierth: is this why github says use "A" record for some?
<lgierth>
yeah probably
<jbenet>
lgierth davidar: we could just tell people to use A records instead. a bit annoying, but hey. option for CNAME is to flip the order of resolution: always check TXT _dnslink.<domain> first, then TXT <domain>. that way _you_ owner of <domain> can always have the last word, as you wont CNAME _dnslink.<domain>
<lgierth>
jbenet: if we say use A instead of CNAME, we don't even need the _dnslink.<domain> dance
<mildred>
By the way, I made a test on my domain (required some time to propagate DNS): dig @nsa.bookmyname.com. example.mildred.fr TXT
<mildred>
The _dnslink subdomain is the same mechanism that is used by XMPP/Jabber
<mildred>
It's something that exists, and is supported. People won't find it strange
<lgierth>
i think it's problematic that a CNAME record also inherits the target's MX records
<jbenet>
mildred: and you specified a TXT on example.mildred.fr that is not being shown?
dignifiedquire has joined #ipfs
<mildred>
no, you can't
<mildred>
If you specify a CNAME, you can't specify anything else
<jbenet>
lgierth: yeah that's another reason github says A problably.
<jbenet>
suddenly their mail server gets all your mail.
<lgierth>
mildred: which is why we're coming up with _dnslink.<domain>
<lgierth>
but it looks like CNAME is too problematic so we probably don't need _dnslink.<domain>
<jbenet>
lgierth: reason i want to support both TXT _dnslink.<domain> and TXT <domain> is limited domain lengths being a problem with hash subdomains. domains are limited to ~260 chars
<mildred>
If I try to specify a CNAME with a TXT, I get: "CNAMEs can't co-exist with other records TXT"
<jbenet>
mildred: ahh fun.
<jbenet>
is there another way, other than CNAME and A, to say "i want domain X to use the A records of domain Y" (not _all_ records)
<jbenet>
i guess not
<mildred>
I don't think so.
<mildred>
You can always do that by hand
<lgierth>
jbenet: i don't understand how _dnslink.<domain> (9 additional chars) helps with domain length ;)
<mildred>
I don't think that adding the length of "_dnslink." to the domain is going to be a problem. Or are you planning to put a multihash in the domain string?
<zorun>
using A/AAAA records defeats the purpose of DNS
<zorun>
(that is, not to depend on IP addresses)
<lgierth>
zorun: yes. but a CNAME inherits *all* records of the target
<mildred>
zorun: I also prefer CNAMEs but some people consider it a bad practice and prefer A/AAAA because it reduces the DNS query time
<zorun>
as soon as people point to you, you just can't change the addresses of the gateway anymore
<lgierth>
zorun: you can, just not transparently
<zorun>
lgierth: well, all the records of a subdomain, so that's ok I think
<lgierth>
you need a migration period, make announcements, etc.
<lgierth>
zorun: we wanna move the gateway to ipfs.io very soon
<zorun>
if I point foo.mydomain.com to gateway.ipfs.io using a CNAME, I don't see the issue
<lgierth>
although, mmh, we could have people point their CNAME to gateway.ipfs.io and keep that subdomain around for that purpose
<lgierth>
yeah
<zorun>
anyway, you need to trust the people running ipfs.io
<lgierth>
then there's nothing to inherit except the A/AAAA records
<jbenet>
zorun: i'm well aware. the point is that DNS is really over complicated, and that hash names are much safer.
<jbenet>
zorun: IPNS is better at that "primary usage of DNS", relegating DNS to only easy-to-remember-for-humans. and it replaces the stupid registrar subdomain hierarchy with just any signatures + PKI
<zorun>
well, yes, but saying this doesn't solve this (infrastructure) problem, which is completely unrelated to IPNS
<zorun>
(IMHO)
<jbenet>
zorun: i said "god, non-hash names are so hard to make work." and i still think so :)
<zorun>
I see :)
<zorun>
jbenet: the registrar system has nothing to do with DNS, btw
<zorun>
it's only a way of making money
<jbenet>
i understand, but it's how ICANN operates, and they rule DNS.
pfraze has quit [Remote host closed the connection]
<jbenet>
nobody uses the non-ICANN DNS roots
<zorun>
sure
<ipfsbot>
[go-ipfs] jbenet pushed 2 new commits to master: http://git.io/vG99d
<ipfsbot>
go-ipfs/master b45e824 rht: Use filepath.Walk to compute serialfile total size...
<ipfsbot>
go-ipfs/master 018e20a Juan Benet: Merge pull request #1628 from rht/size-walk...
<lgierth>
i'll take a couple of notes from this discussion
<jbenet>
lgierth: yeah thanks
<jbenet>
mildred daviddias: so IPLD.... did we resolve it? "@container": "@index" for unixfs? or `"files": { ... }` sublevel?
<mildred>
yes, that seems good to me
<jbenet>
it would be nice not to have a sublevel, and if we can do it with a proper @context, then we dont have to worry about it for now, i think. we can limit our stuff to recognizing the "@type": "mlink" stuff
<jbenet>
mildred: which one?
<mildred>
the sublevel is not just for JSON-LD. it's to avoid conflicts between file names and @... identifiers
<mildred>
If you want to avoid the first sublevel only, you can do so by transmitting the context out of band
<jbenet>
mildred: wait, does JSON-LD have a way to escape the @?
<mildred>
they have a way to tell that a JSON object can contain @ directives or contains only keys using the @container, but you cannot mix the two
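The two shapes being weighed could be sketched roughly like this (plain JS object literals; the hash, file name, and context URL are all made up for illustration):

```javascript
// Hypothetical sketch of the two unixfs shapes under discussion.
// Option A: file names as direct keys next to the JSON-LD @-directives,
// so a file literally named "@context" would clash.
const withDirectKeys = {
  '@context': { files: { '@container': '@index' } }, // made-up context
  'cat.png': { '@type': 'mlink', hash: 'QmAaa' },
};

// Option B: a "files" sublevel, so names can never collide with @-keys.
const withSublevel = {
  '@context': 'http://example.com/unixfs', // made-up context URL
  files: {
    'cat.png': { '@type': 'mlink', hash: 'QmAaa' },
  },
};
```

Option B pays one level of indirection at every directory, but file names can never collide with JSON-LD directives.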
<jbenet>
mildred: so they forbid any top level key named "@context" ?
<mildred>
I think so
<jbenet>
(like obviously, but i mean, they dont have a workaround if there's a problem)
<mildred>
you can use @type on the root object by remapping @type to something else, but not @context
<jbenet>
i think escape would be the solution here, dont you think?
<jbenet>
it's the unixy solution, too. unix uses escapes all over the place, for quotes, \specialchars, etc.
<mildred>
not in paths, character '/' is not allowed
<jbenet>
that's true. but they do it for spaces: "cd foo\ bar/"
<mildred>
adding a "/" prefix was my idea to escape paths. Just remove the first "/" characters and you have the original keys
<mildred>
it's easier also. escaping using \ can be tricky (\\\" or \\\\\\" anyone ?)
<jbenet>
ah, i see. but when do you know when to do it and when not to?
<mildred>
you always do it for data that is not compatible with Linked Data. Linked Data doesn't need it as the type URI is the reference (the JSON key can be anything, if the key conflicts, choose another one)
<jbenet>
but i think the idea that {"@context":A, "\@context":B} --xform--> {"@context":B} would not be too confusing, no?
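A minimal sketch of that transform, assuming a made-up helper name (unescapeDirectives):

```javascript
// Hypothetical "\@" unescape: strip the JSON-LD @-directives, then turn
// escaped keys like "\@context" back into their literal names.
function unescapeDirectives(obj) {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    if (key.startsWith('\\@')) out[key.slice(1)] = value; // "\@context" -> "@context"
    else if (!key.startsWith('@')) out[key] = value;      // drop real @-directives
  }
  return out;
}
```

So `{"@context": A, "\@context": B}` transforms to `{"@context": B}`, exactly the xform described above.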
<jbenet>
so _all non-LD keys ever_ would need /
<jbenet>
even when you're not using LD, nor @context, nor anything
* mildred
prefers the extra level of indirection, it has the advantage of being compatible with JSON-LD directly
<jbenet>
yes but now you have a "files": level injected everywhere in the hierarchy
<mildred>
right, but it's more specific. You have a root object (a file list). One of its property is the actual file list.
<jbenet>
it blows up the JSON tree by inserting nodes all over. it might be ok, but it's less simple.
<mildred>
You can also add more properties such as the unix permissions
<jbenet>
i was thinking the unix permissions can go on the link. this is not JSON-LD friendly.
<mildred>
less simple. If you want extra simple, I suggest you dump JSON-LD entirely
<mildred>
give the ability to either have JSON with a LD context transmitted out of band, or a raw JSON with no context
<jbenet>
"if you want extra simple, I suggest you dump JSON-LD entirely" that's sad. :( -- we should talk to the WG about this and try to get an even simpler thing to exist.
<jbenet>
---in most cases users will just want JSON, let's not mess with that. in some subset of cases people will have JSON-LD (like us adding an @context). if
<mildred>
The problem is that some datastructures don't fit so well in the LD contexts, maps in particular.
<jbenet>
if we were to do the \@context escape, does that "solve everything" ?
<mildred>
what do you mean by solve everything ?
<mildred>
if you do the \@ escaping:
<jbenet>
provided this is only in the case where users want both LD and keys that clash with LD directives.
<mildred>
- the data will not be compatible with a Linked Data object model
notduncansmith has joined #ipfs
<mildred>
- but you will be able to specify names that clash with @ directives
notduncansmith has quit [Read error: Connection reset by peer]
<mildred>
if that solves everything for you, why not
<jbenet>
you mean, because the links can't have data? (that's the only thing, right? but that's solved with "@container": "@index", no?)
<jbenet>
(sorry if i'm making you repeat yourself. all this malleability is mind melting)
<mildred>
I don't understand your question
<mildred>
I think I do...
<mildred>
@container:@index is a way to tell a specific JSON object is a map of other objects. The map, because it is a map and not an object cannot contain directives or LD properties. And the objects children of that map must be linked data objects that are not containers (the extra level of indirection)
<jbenet>
you said "- the data will not be compatible with a Linked Data object model"
<mildred>
In that form: no. if you transmit the context out of band, probably yes
<mildred>
and foo/bar must contain LD objects, not maps. ie. the foo and bar objects cannot have arbitrary keys
<mildred>
foo and bar objects are the extra level of indirection.
<mildred>
jbenet: you're ok with that ?
<jbenet>
mmm not exactly, cause we'd like to have arbitrary keys in links and so on. the more we try it the more it's very much JSON-LD XOR JSON. not, JSON-LD sprinkled-on JSON
<mildred>
I don't think this data structure where you have a @container:@index that points to another @container:@index is possible. it excludes LD objects from the equation. We should perhaps check with specialists of JSON-LD
<mildred>
The problem is that a @container:@index JSON object is meaningless from the Linked Data point of view. It's just an optimization to access some objects more easily. It is not possible to specify a Linked Data property in it.
<jbenet>
mildred ok: suppose we drop full JSON-LD. there is value to the aliasing, because it lets us do shorthand names for full IPFS paths representing types.
<jbenet>
mildred: do you think the expansion we were discussing is ok?
<mildred>
yes, there is
<mildred>
apart from the problem of escaping file names, it has some value
<mildred>
it is simple (which JSON-LD is not) when we need to traverse links
<mildred>
I like prefixing the keys with "/" at the beginning, because when concatenating you don't need to add a separator, it's already there
inconshreveable has joined #ipfs
<mildred>
and it states clearly that "/" is special, you better not use it in link names
<mildred>
\ is more backwards and already has a meaning in JSON. You'll need to write "\\@context"
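mildred's "/"-prefix idea could be sketched like this (helper names are made up); note how concatenation needs no extra separator:

```javascript
// Hypothetical "/"-prefix escaping for unixfs keys.
function escapeKey(name) { return '/' + name; }    // "@context" -> "/@context"
function unescapeKey(key) { return key.slice(1); } // just drop the first "/"

// The prefix already is the path separator, so joining is plain concatenation:
function childPath(parent, name) { return parent + escapeKey(name); }
```

For example, `childPath('/ipfs/QmFoo', 'cat.png')` gives `'/ipfs/QmFoo/cat.png'` with no separator bookkeeping.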
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
m0ns00n has quit [Remote host closed the connection]
<jbenet>
\ has a meaning in json? it escapes ", but nothing else, no?
<jbenet>
mildred: prefixing all keys with / is not good. an explicit goal is to be able to import large bodies of json and serve them as is.
<mildred>
I was thinking this would be just for unixfs
<mildred>
other JSON types would use their own escape mechanism, or not at all if they don't need it
<jbenet>
btw, one nice thing is that a json object having {"foo": {"bar":1}, "foo/bar":2} is not "broken". i.e. it can still work in ipfs, because traversal is deterministic (i.e. lexicographically ordered), so "path/to/obj/foo/bar" -> 1. (whether "path/to/obj/foo\/bar" -> 2 is a good idea is left for another day.)
<jbenet>
(not that i intend to construct such things, merely that they can be ingested)
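The deterministic segment-wise traversal jbenet describes might look roughly like this (the resolver name is an assumption, not go-ipfs API):

```javascript
// Hypothetical path resolver: split on "/" and descend key by key, so
// {"foo": {"bar": 1}, "foo/bar": 2} resolves "foo/bar" to 1 deterministically,
// never reaching the literal "foo/bar" key.
function resolvePath(obj, path) {
  let cur = obj;
  for (const seg of path.split('/').filter(Boolean)) {
    if (cur === null || typeof cur !== 'object') return undefined;
    cur = cur[seg];
  }
  return cur;
}
```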
<mildred>
That's interesting. Why not
<mildred>
If the idea is to have any JSON possible, this is not so absurd
<jbenet>
cool.
<jbenet>
ok, i'm not yet sold on "/<property>" for unixfs. i'm more sold on escaping the @context, but i guess it's a unix-fs specific thing TBD.
<mildred>
So, IPLD is in fact a subset of JSON that is able to define a full JSON document without restriction (using escape mechanism), including JSON-LD is we want
<mildred>
s/is/if/
atomotic has joined #ipfs
<mildred>
I'll have to go take the bus
mildred has quit [Quit: Leaving.]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<daviddias>
Does anyone have the time to go through this issue: https://github.com/hildjj/node-cbor/issues/21 it feels a bit weird that node-cbor always returns an array of objs, even if only one obj was encoded and not in array format. I know it would just be decoded[0], but it feels a bit weird. Would love to have more opinions to understand if it's important for more
<daviddias>
people, or if I should just cope with it
anshukla has joined #ipfs
bedeho has joined #ipfs
voxelot has joined #ipfs
<lgierth>
daviddias: heh, doesn't that break value = decode(encode(value))?
pfraze has joined #ipfs
<daviddias>
it does
<daviddias>
you would have to: value === decode(encode(value))[0]
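A hedged stand-in for the unwrap being discussed (pure JS; node-cbor itself is not loaded here, and decodeFirst is a made-up name):

```javascript
// node-cbor, as discussed above, hands back an array of decoded objects even
// when a single value was encoded; callers have to unwrap element 0 themselves.
function decodeFirst(decodedArray) {
  if (!Array.isArray(decodedArray)) {
    throw new TypeError('expected the array that decode() returns');
  }
  return decodedArray[0];
}
```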
<dawuud>
would tor be considered thinwaist? or put another way... are these code comments about thinwaist relevant?
<josephdelong>
whyrusleeping is my hero
<josephdelong>
thank you
<josephdelong>
btc address?
<dawuud>
after auditing some code in go-ipfs and go-multiaddr-net i can see that DialArgs needs to include a conditional to handle using the TorDialer...
<dawuud>
i think this is where cjdns will insert their "Dialer" as well
<whyrusleeping>
josephdelong: lol, if you really want my address, i use the same username on coinbase.
inconshreveable has joined #ipfs
<josephdelong>
i’ll look you up. You saved me a lot of research time.
<whyrusleeping>
don't worry about it, that's what i'm here for :)
inconshreveable has quit [Read error: Connection reset by peer]
<dawuud>
jbenet: ^ ?
inconshreveable has joined #ipfs
<whyrusleeping>
dawuud: i think your tor addresses would be considered thin waist
jamescarlyle has quit [Remote host closed the connection]
jamescarlyle has joined #ipfs
<ReactorScram>
IPFS over Tor?
<ReactorScram>
I thought I2P would be a natural fit since they already adapted Bittorrent to run on it, but I don't know much about i2p
<whyrusleeping>
ReactorScram: we dont have too many people around that are familiar with i2p
<whyrusleeping>
otherwise there would probably be efforts towards that end
ygrek has quit [Remote host closed the connection]
<dawuud>
ReactorScram: yes. IPFS over Tor hidden service aka onion services
<ReactorScram>
dawuud: I would be happy to see that working
<ReactorScram>
Tor is way easier for me to understand than i2p
<ReactorScram>
and doesn't need Java
<dawuud>
i hope to have a proof of concept soon
<dawuud>
whyrusleeping: thanks... yeah in that case it's fairly obvious where to make the code changes... however the user needs to specify tor specific information... and it is not clear what the best way to represent this is...
<dawuud>
perhaps a plugin system?
Encrypt_ has joined #ipfs
<blame>
+1 plugin systems for all the things
compleatang has quit [Quit: Leaving.]
dignifiedquire has joined #ipfs
zignig has quit [Ping timeout: 255 seconds]
atomotic has joined #ipfs
patcon has joined #ipfs
<voxelot>
-1 for pluggins for all things, webRTC for all things :)
<blame>
-1 webRTC for any of the things. Bootstrapped connections require 3rd party infrastructure and may be vulnerable to man-in-middle attacks :P
<blame>
I like the idea, but id really prefer a way for clients to directly connect.
<dawuud>
i dislike webrtc because it has web in the name
<blame>
voxelot: right now my working solution is hosting websocket servers on every member of a p2p network, so browsers can act as full clients.
<dawuud>
i wonder if the cjdns guys have given this some thought and code auditing yet...
<dawuud>
are you trying to make me vomit?
* whyrusleeping
grabs popcorn
<dawuud>
webrtc seems to be the sendmail of nat penetration
<dawuud>
it's big and clunky and has lots of moving parts
<dawuud>
isn't reliable or secure etc
<blame>
I'd feel better about webrtc if there was a p2p network of middlemen, so we could use diffie-hellmanish things to pick one together without broadcasting which one beforehand
<dawuud>
...but i might change my mind soon
<dawuud>
i'm going to have a gethering with some friends to argue... i mean debate about this stuff
<whyrusleeping>
blame, long time no see!
<blame>
then there is still a built-in man-in-the-middle but at least an attacker does not know which one you are going to use beforehand
<whyrusleeping>
how's the thesis thing going?
<blame>
whyrusleeping: \o I figure I should hang out here more
<blame>
and I'm in the right headspace to be useful
<whyrusleeping>
woo!
<blame>
webrtc is a centralized solution to p2p. It is well designed and works.
<blame>
but it does not meet the critera I want :/
jamescarlyle has quit [Remote host closed the connection]
<blame>
SO I just found a 5-year old paper on essentially what I was planning for my dissertation
Leer10 has joined #ipfs
<blame>
I hate academia. I can spend months working on something (and actively looking for papers) and then find somebody has already done it 5 years ago >.<
<blame>
The good news, is they did the bits I thought would suck (and It looks like they sucked) so, im just gonna cite them for those bits...
<dawuud>
i think that there are these design tradeoffs... and you can have your use cases that work well with webrtc...
mkarrer_ has joined #ipfs
mkarrer_ has quit [Remote host closed the connection]
<dawuud>
that doesn't mean i would use it ;-p
Encrypt_ has quit [Quit: Dinner time!]
<blame>
dawuud: so, counter-offer to traditional webrtc:
mkarrer_ has joined #ipfs
<dawuud>
blame: Tor onion services
<voxelot>
blame: could you explain a bit more about your websocket idea?
<voxelot>
wouldn't this require the client running a browser as a node to have a server up?
<blame>
yes. (I like tor, but overhead in using it is too high for fast adoption right now.)
<blame>
voxelot: no
<blame>
its an idea that hurts for people who have been thinking in terms of p2p for a long time
<dawuud>
blame: not fast enough for your use case?
mkarrer has quit [Ping timeout: 240 seconds]
<blame>
browsers, cellphones, etc make poor dht nodes for infrastructure services
<dawuud>
blame: yeah we should do everything fast... eating fast help digestion, fucking fast... well you know...
<blame>
dawuud: I mean difficulty of the user to set it up
<blame>
dawuud: my idea would run in a browser
<dawuud>
blame: agreed... deployment is a problem.. but we're working on some things to make it easier
<voxelot>
ease of use is my main concern so we get more people on these meshnets
<blame>
voxelot: so, we have them act as clients only
<voxelot>
interesting, and the full nodes host their connection?
<dawuud>
blame: ephemeral hidden services is a new easier to use api for the tor control port... this should help out with lots of onion service applications
<ReactorScram>
dawuud: oh cool I hadn't heard about that
<blame>
voxelot: no proxying! right now we can't use http/ajax due to browser security controls
<dawuud>
i'm going to use it in my golang OnionListener...
anshukla has quit [Remote host closed the connection]
<blame>
voxelot: but I can open a websocket from any page (even a local html file) to anywhere
<blame>
voxelot: so, nodes in the "webrts middleman" p2p network expose iterative seeking + webRTC bootstrapping via websockets
jamescarlyle has joined #ipfs
<blame>
*webrtc
<blame>
You do something as simple as "I want to link to guy with address x, and I have address y" lookup the server at xXORy in the network, and both of you connect to it as the webRTC server.
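blame's rendezvous trick could be sketched as follows (the function name and raw-bytes id format are assumptions):

```javascript
// Hypothetical rendezvous-id derivation: both peers independently compute
// x XOR y, so they agree on the same middleman server without announcing
// the choice to anyone beforehand.
function rendezvousId(x, y) {
  // x, y: equal-length Buffers holding the two peers' addresses/ids
  const out = Buffer.alloc(x.length);
  for (let i = 0; i < x.length; i++) out[i] = x[i] ^ y[i];
  return out;
}
```

XOR is symmetric, so `rendezvousId(a, b)` and `rendezvousId(b, a)` pick the same server.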
<voxelot>
right, that is the goal of webtorrent as well right?
<blame>
cool! I'm trying to figure out how they bootstrap webrtc connection? (ie where are the supernodes?)
<voxelot>
or not, you are doing webbrowser as client only
<blame>
once you have connections bootstrapped, you have more options.
<blame>
but essentially yes. I'm generally against trying to host a p2p network on a bunch of web-browser sessions
<voxelot>
they are trying to use "hybrid nodes" to punch NATs with STUN servers and just tell each browser how to connect to each other
<dawuud>
i really like YawningAngel... my favorite tor dev
<blame>
voxelot: but how do they find a hybrid node?
<dawuud>
[tor-dev] RFC: Ephemeral Hidden Services via the Control Port
<dawuud>
lgierth: did you look into patching go-ipfs for use with cjdns yet? i think we need a lot of the same things for the tor onion integration...
<voxelot>
blame: yeah they use the hybrids to route the dht for the "web peer"
<voxelot>
so i think it is closer to your idea
<voxelot>
and you get those hybrids from a list of known
<daviddias>
anyone with openssl knowledge, can decrypt this error - `'error:0D07209B:asn1 encoding routines:ASN1_get_object:too long'` The internet is telling me to go all sorts of places, but none has solved my issue. I'm simply looking to sign an object with:
<daviddias>
var ecdh = crypto.createECDH('secp256k1')
<daviddias>
ecdh.generateKeys()
<daviddias>
var sign = crypto.createSign('RSA-SHA256')
<daviddias>
sign.update(buf)
<daviddias>
var buf_signed = sign.sign('-----BEGIN PRIVATE KEY-----\n' +
<ReactorScram>
wait do you mean infinite (byte strings) or (infinite byte) strings
<blame>
its true for all hash functions
<blame>
hash functions are defined as a mapping of an infinite set to a finite one, ideally where the likelihood a given input will map to a given output is equal for all outputs
<ReactorScram>
blame: Yeah I read it wrong ^
<blame>
(i%10)==5 is true for an infinite number of integers
<ReactorScram>
I thought he meant an infinite string of bytes
<blame>
hmmm, hashing an infinite set is an interesting problem
<blame>
it would need to be encodable somehow
<blame>
then you would hash the encoding
<blame>
so, I could hash non-terminating decimals, but not any given irrational number
<blame>
but I would hash pi and e and all derivatives of them because we have encodings for them
<whyrusleeping>
blame gets it.
<whyrusleeping>
pretty bizarre when you think about the implications
<blame>
How to store irrational numbers is actually a problem I have been thinking about
<blame>
I'm trying to make a "future proof" model of computation
<blame>
more as a time-capsule than practically useful, but some well documented model of computation + encoding algorithms that could still be understood and utilized in the far future when our ideas of processing speed and available memory are different.
<krl>
blame: you're the stackstream author right?
<blame>
yeah
<krl>
<3
<blame>
and I need to spend some effort on that
<blame>
I've been panicking about other things
<krl>
i really want to implement it in js
<ReactorScram>
Maybe that's what that crazy pure FP p2p hash based OS is for
<ReactorScram>
I keep forgetting the name, it's urbit or hoon or somehting
<zorun>
I think it can guarantee amortised O(1) for all operations (insert, search, etc)
<krl>
blame: yes, that's a good starting point
<blame>
aha, you build the datastructures out of functions!
<krl>
blame: exactly
<blame>
sorry, I was being dense. I don't do pure functional very often
<krl>
so the datastructures could actually be self-describing as well
<krl>
just call resolve x on the map
<zorun>
blame: this is his PhD thesis, I think, I read his book "Purely Functional Data Structures"
<krl>
blame: i did an append-only finger tree on ipfs, documentation TBD...
<blame>
we can also store both values and the function to get the keys in content addressing
<krl>
yep
<blame>
content addressing solves a lot of problems. Only the "tell me where to find the root" problem stands out
<krl>
okasakis book is unfortunately not really complete in the red-black tree implementation
<krl>
it has no remove operation
<krl>
and personally i think 2-3-trees are easier to reason about
marianoguerra has joined #ipfs
<krl>
or b-trees in general
<blame>
And maps done this way are O(lg(n)) lookup, which is the real lower bound
atrapado has joined #ipfs
pfraze has quit [Remote host closed the connection]
<krl>
finger trees have log n lookup worst case, and constant time for first and lest element
<krl>
*last
<blame>
Thats at the core of a lot of what I am doing with stackstream. It is intended to admit to a lot of lies Computer Scientists tell themselves
<blame>
like accessing memory is O(1) time
<blame>
we like to hide orders of complexity behind "sufficiently large/small" constants
<blame>
and when you stop doing that, things like the speed of light enforce that you cannot get to memory faster than O(lg(n)) time.
<blame>
well, RAM style memory
<jbenet>
whyrusleeping: "re caching config": config can be cached BUT (a) writes should _always_ write through to disk, (b) need `ipfs config reload`, (c) after write, reload from disk. (why (c)? because getting a situation where write fails, and state diverges may drive users crazy) (d) (EC) "config has changed" notifications for modules that use the config, so they
<jbenet>
can readjust. this is non trivial and should be future work
<jbenet>
josephdelong: do you think we should setup a BTC fund for donating to IPFS development?
<jbenet>
dawuud: no, tor addrs not thin waist. thin waist == ({tcp, udp}/){ip4,ip6}.
<jbenet>
dawuud: sry will CR soon. been busy.
<jbenet>
blame, dawuud: the Web is amazing. please no web hate here. we're working on the web.
<jbenet>
whyrusleeping: Nice. except the "Update branch" button. no rebase. doing it wrong.
<jbenet>
blame: "find someone ... 5 years ago" yes. major problem. will fix this right after network datastruct transports.
<jbenet>
blame: websockets stuff: working on it! and not just websockets, need this to work over regular HTTP longpolling too. multitransport. and not just for signaling-- for arbitrary traffic. see multistream. maybe help there (libp2p) down the road. my plan is to release an extension for "libp2p" that adds more transports to browsers, for those devs who want to.
<jbenet>
(like ble, audio, tcp, udp, and more.)
<jbenet>
whyrusleeping: "infinite byte strings that hash to a given value" wat. i dont understand.
<jbenet>
daviddias: i would but im about to hop on train sry.
<jbenet>
blame: get a github.com/jbenet/random-ideas or github.com/ipfs/notes
notduncansmith has quit [Read error: Connection reset by peer]
bedeho has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping created fix/godeps-1.5 (+1 new commit): http://git.io/vGQdP
<ipfsbot>
go-ipfs/fix/godeps-1.5 be14e5f Jeromy: import sub-lib of iptb so go doesnt whine about importing a binary...
<ipfsbot>
[go-ipfs] whyrusleeping opened pull request #1644: import sub-lib of iptb so go doesnt whine about importing a binary (master...fix/godeps-1.5) http://git.io/vGQbt
josephdelong has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
anshukla has quit [Remote host closed the connection]
<ipfsbot>
[go-ipfs] whyrusleeping created hotfix/godeps-pb (+1 new commit): http://git.io/vG7To
<ipfsbot>
go-ipfs/hotfix/godeps-pb 5141fb0 Jeromy: fix import path of generated proto files...
<ipfsbot>
[go-ipfs] whyrusleeping opened pull request #1645: fix import path of generated proto files (master...hotfix/godeps-pb) http://git.io/vG7Ty
<whyrusleeping>
i would love it if travis CI would stop doing shit with godeps
<whyrusleeping>
just run the code i give you, nothing more please
<whyrusleeping>
stupid shitty ci service...
* whyrusleeping
rabble rabble rabble
anshukla has joined #ipfs
<ipfsbot>
[go-ipfs] MichaelMure opened pull request #1646: fix swaped argument in rabin.go (master...fix_argument_swap) http://git.io/vG7L7
anshukla has quit [Remote host closed the connection]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
* blame
really wishes there was a way to write to ipfs using fuse
<whyrusleeping>
blame: 'ipfs mount && echo "something" > /ipns/local/a'
<blame>
:D
<blame>
is there a way I can get the ipfs multihash key for what I just wrote?
<whyrusleeping>
probably
<whyrusleeping>
'ipfs resolve `ipfs name resolve`'
<whyrusleeping>
'ipfs resolve `ipfs name resolve`/path/you/wrote/to'
anshukla has joined #ipfs
anshukla has quit [Remote host closed the connection]
<blame>
yes! I can work with that
<blame>
more incentive for the full python ipfs api
chriscool has joined #ipfs
hellertime has quit [Quit: Leaving.]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
amstocker has joined #ipfs
marianoguerra has quit [Ping timeout: 256 seconds]
uuidgen has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping closed pull request #1645: fix import path of generated proto files (master...hotfix/godeps-pb) http://git.io/vG7Ty
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
aubrey has joined #ipfs
voxelot has quit [Ping timeout: 244 seconds]
aubrey has left #ipfs [#ipfs]
Eudaimonstro has quit [Remote host closed the connection]
ianopolous has joined #ipfs
chriscool has quit [Ping timeout: 244 seconds]
Eudaimonstro has joined #ipfs
inconshreveable has quit [Remote host closed the connection]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
uuidgen has quit []
rabble has quit [Remote host closed the connection]
anshukla has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
voxelot has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping created fix/mock-notif (+1 new commit): http://git.io/vG7oa
<ipfsbot>
[go-ipfs] whyrusleeping opened pull request #1647: fix mock notification test (master...fix/mock-notif) http://git.io/vG7Kf
<lgierth>
dawuud: the first cjdns+ipfs thing is Discovery.Cjdns, which queries cjdns' routing table for possible new ipfs peers to dial
<lgierth>
this uses simple ipv6 adresses
<lgierth>
get an fc00::/8 address from cjdns, assume port 4001, shake hands to find out the remote PeerID, then dial /ip6/fc12::3456/tcp/4001/ipfs/<peerID>
<lgierth>
then you can go remove the bootstrap nodes from your config
pfraze has quit [Remote host closed the connection]
<lgierth>
and set Swarm.AddrFilters to forbid ip4 ranges, and ip6 ranges before and after fc00::/8
<lgierth>
regardin NAT, cjdns doesn't try to punch holes
<lgierth>
peerings are either explicitly established over udp, or automatically on the local network
<lgierth>
eventually i want ipfs to be a first-class cjdns node
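The Discovery.Cjdns flow above could be sketched with two small helpers (names are made up; only the multiaddr shape and the fc00::/8 check come from the discussion):

```javascript
// Take an fc00::/8 address from cjdns' routing table, assume port 4001, and
// build the dial string /ip6/<addr>/tcp/4001/ipfs/<peerID>.
function cjdnsPeerAddr(ip6, peerId, port = 4001) {
  return `/ip6/${ip6}/tcp/${port}/ipfs/${peerId}`;
}

// cjdns allocates addresses from fc00::/8, i.e. the first byte is 0xfc,
// so candidate addresses start with "fc".
function inCjdnsRange(ip6) {
  return ip6.toLowerCase().startsWith('fc');
}
```

This pairs with the Swarm.AddrFilters step: forbid ip4 and any ip6 outside fc00::/8, and only fc-prefixed peers remain dialable.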
<dawuud>
hmm interesting
<lgierth>
we need a multi-thing for handshakes and encrypted streams, so that ipfs can use more than our home-brewed secio
<lgierth>
then bitswap could act as a cjdns protocol -- its current protocols are IP6, IPTunnel, and CJDHT
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
josephdelong has quit [Quit: josephdelong]
domanic has joined #ipfs
inconshreveable has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
atrapado has quit [Quit: Leaving]
t3sserakt has quit [Quit: Leaving.]
pfraze has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
mildred has quit [Ping timeout: 244 seconds]
dignifiedquire has quit [Quit: dignifiedquire]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Encrypt_ has quit [Quit: Quitte]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
mkarrer has joined #ipfs
mkarrer has quit [Remote host closed the connection]
mkarrer has joined #ipfs
mkarrer_ has quit [Ping timeout: 264 seconds]
<blame>
huh, when you are running ipfs and fuse, pasting a /ipfs/hash uri just works
<blame>
+1 awesome
domanic has quit [Read error: Connection reset by peer]
<lgierth>
in irc?
<blame>
in a browser
notduncansmith has joined #ipfs
<blame>
it interprets the '/stuff' to think you mean a file
notduncansmith has quit [Read error: Connection reset by peer]
<blame>
and then it maps to the right file
<lgierth>
heh
<lgierth>
turns into file:///ipfs right?
<blame>
yup
<blame>
works for ipns too
akhavr has quit [Ping timeout: 252 seconds]
<blame>
does ipns work well yet?
Leer10 has quit [Ping timeout: 244 seconds]
<blame>
More useful question: what is stored here? /ipns/QmWgzJk4n5QRfbgXRDjf5emUbdGr8trM1rRZacWVadScFY/
Leer10 has joined #ipfs
<lgierth>
ipns doesn't have ordering at the moment, from what i understand, so you can and will get old objects for an ipns namespace
<blame>
blegh
<lgierth>
record validity is in the works though, which will allow for time and signature-based ordering
<lgierth>
records will be first-class objects so you can pin them
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
bedeho has quit [Ping timeout: 250 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
inconshreveable has quit [Remote host closed the connection]