<voxelot>
plus nodejs is brilliant, and browserify
<rsynnest>
i dont understand what you mean by the "right to own that data in the first place"
<voxelot>
imho, and my inner procedural language college self cringes when i think about how i code in all js these days lol
<rsynnest>
can you elaborate a bit
<rsynnest>
the only solution I can think of to the server side paradigm is to have local decryption happen for each user
<voxelot>
yup!
<rsynnest>
so that way you could share your entire encrypted site, but only the people with the right hash for their assigned content could decrypt after downloading the content locally
kvda has joined #ipfs
<rsynnest>
does ipfs do something like that already?
<voxelot>
yeah ipfs does encrypt and if you want to go ham, check out cjdns
devbug has joined #ipfs
<lgierth>
no it doesn't encrypt objects
notduncansmith has joined #ipfs
<lgierth>
it encrypts the network connection between nodes but *that's all*
notduncansmith has quit [Read error: Connection reset by peer]
<kvda>
how atomic can you get with text data? are you able to write 'plugins' of sorts that would process text content in a different way
<lgierth>
it will be able to encrypt objects soon-ish
erikd has joined #ipfs
<voxelot>
lgierth: cjdns correct?
<voxelot>
ipfs white paper speaks of object encryption, not sure if we are there yet tho
<lgierth>
no i mean ipfs. it doesn't encrypt objects yet
<voxelot>
ohh gotcha, thanks for the clarification
pfraze has joined #ipfs
ke7ofi has joined #ipfs
devbug has quit [Ping timeout: 265 seconds]
thomasreggi has joined #ipfs
mkarrer has left #ipfs [#ipfs]
domanic has joined #ipfs
wasabiiii has quit [Quit: Leaving.]
patcon has quit [Ping timeout: 260 seconds]
pfraze has quit [Remote host closed the connection]
dPow has joined #ipfs
wasabiiii has joined #ipfs
voxelot has quit [Ping timeout: 264 seconds]
pfraze has joined #ipfs
patcon has joined #ipfs
<rsynnest>
voxelot: This is basically what I'm thinking for distributing secure data objects: http://i.imgur.com/dZCdiht.png
<rsynnest>
Assume the object is a user's encrypted password.
<rsynnest>
In this case all the user will need is 2 things:
<rsynnest>
1. A Unique ID (public key) to identify which secure data objects belong to them, so they can retrieve only those objects from seeders (without having to download an entire password database for example)
<rsynnest>
2. A private key to decrypt the data object locally.
<rsynnest>
This could be made user friendly by making it part of the IPFS protocol, that way each user only needs one public and one private IPFS key, and all IPFS servers will use the same standard to encrypt the data.
<rsynnest>
This way seeders/peers will be able to host sensitive server side data objects, but won't have access to any decrypted data or private keys.
<rsynnest>
Using existing encryption keys such as GPG or SSH could also improve usability.
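The retrieval model rsynnest sketches above can be mocked up in a few lines. This is only an illustration of the indexing idea (the store, the `publish`/`fetch_mine` names, and the placeholder ciphertext are all hypothetical, and no real encryption happens here): seeders index ciphertext by the owner's public ID, so a client pulls only its own blobs and decrypts them locally.

```python
import hashlib

# Hypothetical seeder-side index: owner's public ID -> {content hash: ciphertext}.
# Seeders only ever hold ciphertext; plaintext and private keys never leave the client.
store = {}

def publish(owner_public_id: str, ciphertext: bytes) -> str:
    """Store an encrypted object under the owner's public ID; return its content hash."""
    object_hash = hashlib.sha256(ciphertext).hexdigest()
    store.setdefault(owner_public_id, {})[object_hash] = ciphertext
    return object_hash

def fetch_mine(owner_public_id: str) -> dict:
    """Retrieve only the caller's objects -- no need to download anyone else's data."""
    return dict(store.get(owner_public_id, {}))

# Demo: two users publish; each retrieves only their own ciphertext.
h1 = publish("alice-pubkey", b"<alice ciphertext>")
publish("bob-pubkey", b"<bob ciphertext>")
mine = fetch_mine("alice-pubkey")
```

A real system would of course encrypt the payloads before publishing and verify the content hashes on fetch.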
<wasabiiii>
What was the reasoning for not having a key/value set per object, for metadata?
<rsynnest>
how do you mean?
Leer10 has joined #ipfs
<drathir>
cjdns ip for auth is nicer...
<wasabiiii>
Well, objects have data and links; but no standard way, as far as I can tell, to deal with metadata. Such as Content-Type.
<drathir>
also the server would need to handle all decrypt/encrypt for all clients...
ansuz has joined #ipfs
<rsynnest>
cjdns is fine and would work in tandem with this. this wouldn't be for all data
<spikebike>
wasabiiii: generally with CAS (content-addressable storage) you use the CAS to build a filesystem, not try to mix key/value and CAS together.
<drathir>
sounds like makin two times the same work...
<rsynnest>
this is for distributing sensitive data like passwords to allow seeders to host an entire site, not just public data
<rsynnest>
cjdns is more like https as far as i can tell
<wasabiiii>
spikebike: file systems tend to have metadata associated with content. In HTTP-land that's like, Content-Type. But for Unix, it might be posix attributes, or posix acl. I see that the file system objects are using magic in data[]
rdbr has quit [Ping timeout: 240 seconds]
<spikebike>
rsynnest: what if two clients want to read the same encrypted blob?
<jbenet>
hey everyone, i imagine _LOTS_ of questions got answered here today. it would be really useful to get the help of those who had questions to put them into https://github.com/ipfs/faq/issues -- that way others can find them easily
<spikebike>
wasabiiii: unix filesystems basically use a file for the directory info (list of metadata) which of course can be stored in the CAS
<rsynnest>
spikebike: as long as they both have the key they can both access the data
inconshreveable has joined #ipfs
inconshreveable has quit [Read error: Connection reset by peer]
<wasabiiii>
spike, when you request a file from the gateway, how does it know to set the Content-Type of the response?
<spikebike>
rsynnest: sure, the directory/file has a link to all the things in the dir, which of course includes the checksum for each object
<drathir>
yea but it also proves to you it's you, which could easily be used for auth without needing extra crypto power...
<jbenet>
spikebike rsynnest: on enc/decryption -- we'll also have a capabilities-based encryption/decryption model built in soon enough.
<jbenet>
Tahoe LAFS has paved the way there and am in contact with zooko and warner about it.
inconshreveable has joined #ipfs
<wasabiiii>
Extension/probing, etc?
<warner>
:)
<jbenet>
hey warner! o/ -- i want to collab on making a "Tahoe-CAPS" cross-platform lib for capabilities. i think many info-systems could really use something like this.
<jbenet>
(like, something similar to djb's NaCl libs, but for the tahoe/erights model (and maybe other models, too, if relevant))
<drathir>
the built-in gpg encryption of files ofc would be a nice feature...
border_ has quit [Quit: Konversation terminated!]
<jbenet>
@Everyone: on the "getting things onto the FAQ" above o/ o/ o/ -- would really help to get any help there. am sure not all askers will transfer things, so any help making https://github.com/ipfs/faq/issues more informative would be great
<spikebike>
wasabiiii: just like on a unix file system, the mimetype, acl, or whatever can be looked up from the directory. Works well for symbolic/hard links as well
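A toy version of what spikebike describes, with metadata such as Content-Type living in a directory object that is itself just another CAS entry (the JSON layout and field names here are invented for illustration, not IPFS's actual unixfs format):

```python
import hashlib
import json

cas = {}  # toy content-addressable store: sha256 hex -> bytes

def put(data: bytes) -> str:
    """Store a blob under its own content hash."""
    key = hashlib.sha256(data).hexdigest()
    cas[key] = data
    return key

# Store two files, then a directory object that carries per-entry
# metadata (e.g. Content-Type) alongside each child's hash.
readme = put(b"hello ipfs")
logo = put(b"\x89PNG...")
directory = put(json.dumps({
    "README.txt": {"hash": readme, "content_type": "text/plain"},
    "logo.png": {"hash": logo, "content_type": "image/png"},
}, sort_keys=True).encode())

def content_type(dir_hash: str, name: str) -> str:
    """A gateway can answer Content-Type by consulting the directory object."""
    entries = json.loads(cas[dir_hash])
    return entries[name]["content_type"]
```

The directory is content-addressed like everything else, so changing one entry's metadata yields a new directory hash while the file blobs are untouched.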
* warner
nods
<jbenet>
(would be a big win if it could be layered nicely over REST apis too. not sure what that might mean yet, just that people on the web need to move to using caps instead of silly permission tables)
<warner>
it's closely tied to your encryption model, of course
<Xe>
capabilities?
<drathir>
but also in theory even encryption isn't needed, because cjdns ip-based acls can serve specific data via http server only to specified users, w/o using extra power... ofc never too much encryption, and one more lvl of security is always a good thing...
<drathir>
built-in acl support and gpg are both nice ideas in my opinion, ofc if that's possible...
hyperbot has quit [Remote host closed the connection]
hyperbot has joined #ipfs
<pjz>
jbenet: what's the timeline on node-encrypted data?
<NfNitLoop>
pjz: *wave*
<pjz>
NfNitLoop: heya
<NfNitLoop>
small world. s/world/internet/
<pjz>
NfNitLoop: it is!
<pjz>
NfNitLoop: I kinda boggled when I finally got a chance to look at the tab I'd opened from the link you posted on facebook and it was an ipfs link
<NfNitLoop>
boggled why?
<pjz>
NfNitLoop: just surprise
<pjz>
NfNitLoop: ipfs isn't hugely well known (though it's getting there!)
<NfNitLoop>
I'd watched the talk a while back. Came up again on /., which I was nostalgic enough to read again this week. Thought I'd give it another look.
<pjz>
ah
<NfNitLoop>
One problem I see so far is that people can sortof see your browsing history by figuring out what objects your node has. ipfs doesn't seem to have the deniability(?) that FreeNode did.
<NfNitLoop>
s/Node/Net/
<spikebike>
NfNitLoop: from timing or is there a query for node local storage only?
<NfNitLoop>
I assume that you can find who has objects via the DHT. So, you find some controversial object, ask who has it, find their IP address, and blackmail for fun and profit.
<NfNitLoop>
Or just release the data for trLOLs.
<spikebike>
yeah, IPFS is definitely not a tool for anonymously browsing kiddie porn
<NfNitLoop>
Or topics likely to get you killed in backwards countries.
voxelot has joined #ipfs
<spikebike>
do IPFS nodes without an object find nodes with it cached, or just pinned?
<NfNitLoop>
wouldn't be very redundant if people had to manually pin everything...
<spikebike>
true, but not like caches don't expire as well
<spikebike>
pinned = cached and doesn't expire
<NfNitLoop>
Yup, I know.
<pjz>
I'd thought about that as well, and noted that there's also no option to just 'help out' and offer to cache stuff
<pjz>
which would mitigate the 'I was browsing that' function
<spikebike>
not sure a lawyer would care about such mitigation
<pjz>
by being able to say 'well, I was free-caching'
<NfNitLoop>
Yeah, in FN, you would just end up passively caching stuff. Too bad all that extra indirection made it dog slow. :p
<spikebike>
freenode's deniability was never tested in court afaik
<NfNitLoop>
s/node/net/
<spikebike>
people could run a daemon that pins popular content
<pjz>
sure
<NfNitLoop>
How do they know what's popular?
<pjz>
trending news
<NfNitLoop>
I guess you could listen for DHT queries...
<spikebike>
nodes could track dht lookups
<pjz>
stuff that gets looked up a lot
<pjz>
and when to unpin it?
<pjz>
I mean, sure it's popular today, but in a month no one will remember it
<NfNitLoop>
pinning might need a bit more granularity. "Pin this for up to 15 days. Renew if it gets accessed."
<spikebike>
random numbers... if you want to donate x bandwidth and y disk just pin x% of lookups (like maybe 1%) and expire things as it gets full
<voxelot>
objects don't disappear if they aren't pinned
<voxelot>
you have to run gc; just adding it will keep it there forever atm
<spikebike>
that way popular content gets cached on a large number of nodes, and unpopular content on many fewer (but non-zero) nodes
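spikebike's donation scheme (pin a random fraction of observed DHT lookups, expire old pins when the budget fills) might look roughly like this; the class name and all the knobs are hypothetical:

```python
import random
from collections import OrderedDict

class DonorCache:
    """Pin a random fraction of observed DHT lookups; evict oldest when over budget."""

    def __init__(self, budget: int, pin_rate: float, seed: int = 0):
        self.budget = budget          # max number of objects to hold
        self.pin_rate = pin_rate      # fraction of lookups to pin, e.g. 0.01
        self.pinned = OrderedDict()   # object hash -> True (insertion order = age)
        self.rng = random.Random(seed)

    def observe(self, object_hash: str) -> None:
        if object_hash in self.pinned:
            self.pinned.move_to_end(object_hash)  # refresh: popular content stays pinned
            return
        if self.rng.random() < self.pin_rate:
            self.pinned[object_hash] = True
            while len(self.pinned) > self.budget:
                self.pinned.popitem(last=False)   # expire the oldest pin

# Demo: watch 10k lookups over 2000 distinct hashes with a 50-object budget.
cache = DonorCache(budget=50, pin_rate=0.1, seed=42)
for i in range(10_000):
    cache.observe(f"Qm{i % 2000}")
```

The refresh-on-hit step gives the behavior pjz asks about below: content that keeps getting looked up stays pinned, and stale content ages out on its own.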
<pjz>
voxelot: but you're pretty forced to run gc if you run out of disk
<voxelot>
true
<NfNitLoop>
^-- that.
<pjz>
voxelot: also, what you said, but especially the 'atm' part
<spikebike>
then again why bother?
<voxelot>
yeah not sure if there are any plans to automate the gc
<spikebike>
why not just cache the stuff you read? If nobody caches it, maybe it's not worth it?
<pjz>
it might get automated a layer up, but some app that runs on top of ipfs
<pjz>
s/but/by/
<pjz>
I'm interested in the private data bits as well
<spikebike>
ideally with a proxy or browser plugin so whatever you surf is automatically cached in ipfs
<pjz>
encrypted with my node private key, so nodes that I've "friended" or whitelisted somehow can read the private data, but it's so much garbage to anyone else
<spikebike>
pjz: tricky, especially since then people you've friended could forward
<spikebike>
seems better to have an ACL of public keys, and a encrypted session key
<spikebike>
so anyone on the whitelist can get a session key as needed, but anyone not on the ACL will be locked out
<spikebike>
similarly files aren't actually encrypted with public keys
<spikebike>
pgp and friends just encrypt a symmetric key with the public key, then encrypt the file with the symmetric key
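The hybrid pattern spikebike describes (encrypt the file once with a symmetric session key, then wrap that key for each public key on the ACL) in skeletal form. The XOR keystream below is a stand-in for a real cipher and is NOT secure; it only shows the key-wrapping structure:

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-derived keystream. NOT secure."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def encrypt_for_acl(plaintext: bytes, acl: dict) -> dict:
    """acl maps recipient name -> that recipient's (toy) key material."""
    session_key = os.urandom(32)
    return {
        "ciphertext": keystream_xor(session_key, plaintext),
        # wrap the session key once per recipient: adding a reader adds one
        # wrapped key and never re-encrypts the file itself
        "wrapped_keys": {who: keystream_xor(k, session_key) for who, k in acl.items()},
    }

def decrypt(blob: dict, who: str, key: bytes) -> bytes:
    session_key = keystream_xor(key, blob["wrapped_keys"][who])
    return keystream_xor(session_key, blob["ciphertext"])

acl = {"alice": os.urandom(32), "bob": os.urandom(32)}
blob = encrypt_for_acl(b"secret memo", acl)
```

This is the property spikebike is after: anyone on the ACL can unwrap the session key, anyone off it gets garbage, and the bulk data is only encrypted once.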
<pjz>
spikebike: any encryption method can fail if you share with someone untrustworthy
<pjz>
spikebike: so that failure mode is acceptable
<spikebike>
pjz: right, but it's nice if it's non-trivial for the entire planet to get your content
<spikebike>
if you send a friend "hey look at this cool...." it's fairly human to forward it
<spikebike>
thus 90% of facebook/g+ content
<spikebike>
for similar reasons "joining" is usually via a one time use token
<spikebike>
not a password that could add N people
<spikebike>
just arguing for the invite model not the you know a secret model.
<pjz>
spikebike: ah, but unless you've 'friended' nodes those other people have access to, they won't be able to decrypt it
<pjz>
spikebike: oop, I got the encryption wrong. I think it's more like 'they're friended so I share this with them by encrypting it with their public key so only they can read it'
<spikebike>
ah, yeah, that sounds good
<pjz>
spikebike: what I get for trying to describe encryption protocols this late at night :)
<pjz>
spikebike: faq #4 and #6 are built around this idea though, I think.
<spikebike>
encrypting before writing to IPFS seems best for #4 (for now)
<voxelot>
have you guys tried cjdns yet too? you can config who connects to you. ipfs+cjdns is just kreygasm
simonv3 has quit [Quit: Connection closed for inactivity]
<spikebike>
I'm going to have to dig around in the API docs, but I think access to the DHT (for finding peers and key/value store), bitswap, and ability to open a socket (through IPFS or not) would be enough for some cool p2p related apps on top of ipfs
<voxelot>
spikebike: think about what happens when we get this all to run in the browser without a dl or plugin or daemon, every user using p2p without even knowing it
<voxelot>
we can haz all the web apps (in theory)
* spikebike
shudders, but whatever floats your boat
<spikebike>
i'm all for an IPFS proxy that publishes what I download
<spikebike>
not so much for running anything sensitive in javascript, where it's hard to tell what you are actually running
<pjz>
It'd be more useful today, I think, if it were behind a shared proxy
<pjz>
web proxies would then be sharing popular content automagically
<spikebike>
ya
<spikebike>
although only for http content not https
<pjz>
web proxies are really only useful for http anyway
<spikebike>
well it's a crude measure
<spikebike>
sadly there's little differentiation between a secure connection and secure content
<spikebike>
thus ssl makes caching cross clients dangerous
<pjz>
hmm
<pjz>
what would be interesting would be websites that pinned themselves in ipfs
<spikebike>
pjz, er, pinning is client side
<spikebike>
so I'm not sure what you mean
<pjz>
hmm
<pjz>
pondering ways for http websites to hint at ipfs
<spikebike>
sites publishing content in ipfs is already happening
<pjz>
right
<spikebike>
but yeah it would be cool if there was a standard tweak to say robots.txt or an entry in the dht that made it fast and cheap to detect a regular site has content in ipfs
<pjz>
IPFSTag: Qm24875joajv98j...
<pjz>
in the HTTP headers
<spikebike>
then what though?
<pjz>
then the browser would know to go for content through ipfs instead of http
<pjz>
if it runs a local node
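A client-side sketch of pjz's idea: 'IPFSTag' is the hypothetical header name from the discussion (not a real standard), and the localhost gateway address assumes a default local node is running.

```python
def rewrite_to_local_gateway(headers: dict, original_url: str,
                             gateway: str = "http://127.0.0.1:8080") -> str:
    """If the site advertises an IPFS hash via the hypothetical IPFSTag
    header and a local node is running, fetch via the gateway instead."""
    tag = headers.get("IPFSTag")
    if tag:
        return f"{gateway}/ipfs/{tag}"
    return original_url  # no hint: fall back to plain HTTP

# Demo with made-up response headers and hash.
url = rewrite_to_local_gateway(
    {"IPFSTag": "QmExampleHashOnly"},
    "http://example.com/index.html",
)
```

As spikebike notes next, this only covers the page you fetched the header from; linked resources would still need their own hints or a site-wide signal.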
<spikebike>
that's the index.html file you can get from IPFS, but what about the rest of the content?
<pjz>
right
<pjz>
all the links would still be http instead of ipfs
<pjz>
and even HEAD can get expensive after awhile
<spikebike>
maybe a DKIM/DANE/XMPP like dns record that says ya, content under this domain supports IPFS
<pjz>
although maybe the answer is instead to redirect to an ipns root
<spikebike>
maybe a few valid values like, all, some, always, fallback, etc.
<spikebike>
preferably after someone else agrees that it's attractive
amstocker has quit [Ping timeout: 246 seconds]
<jbenet>
spikebike: sounds good. btw, you'll be excited to hear about what daviddias and i have been cooking up with libp2p. we'll make a nice & pretty announcement soon, but basically, making a set of standard interfaces for p2p systems. + kick it off with impls in both node + go. (transports, discovery, peer routing, dht, signed records, ....)
hyperbot has joined #ipfs
<spikebike>
jbenet: sounds promising, I've been off #ipfs for a bit and haven't been tracking p2p.
<pjz>
I keep wondering the best way to facilitate the goals of http://unhosted.org/ with ipfs
<jbenet>
pjz: it basically turns the "red webapp box" into a "green webapp box" where it lives in the users browser too
* spikebike
bikes home, be online in 20 ish
<jbenet>
spikebike: gav wrote that up after a chat with me a long time ago. we were going to work together on it
<jbenet>
(but we both got busy)
phorse has joined #ipfs
voxelot has quit [Ping timeout: 246 seconds]
<wasabiiii>
Any docs yet on the various communication protocols used? I see in the paper that you've got nice interfaces defining the various operations between nodes... but no protocol level stuff.
<wasabiiii>
(considering the difficulty in writing a library in my programming language of choice)
<erikd>
wasabiiii: what language is that?
<wasabiiii>
.net (C#)
<erikd>
oh, i'm sorry
<wasabiiii>
Yes. Let's have a language fight. That seems like a lovely idea. :)
<wasabiiii>
(not)
<jbenet>
yeah, please no {language, editor, platform} choice fights in here.
<jbenet>
wasabiiii: saw the specs repo? sorry it's not more clear; our use of protobufs and layered protocols makes it non-trivial to write a "byte-for-byte" rep.
<wasabiiii>
Oh. There we go. I stumbled across a different 'specs' repo that was empty at one point.
hyperbot has quit [Remote host closed the connection]
<wasabiiii>
Sweet. Thanks. Will be reading the various specs. Also, your project rocks.
jason____ has joined #ipfs
jason____ is now known as jasongreen
<jasongreen>
Getting an odd error when I try to start ipfs daemon
<jasongreen>
error from node construction: 1: listen tcp6 [::]:4001: socket: address family not supported by protocol
<jbenet>
jasongreen what's your architecture?
<jasongreen>
running 0.3.8-dev on an RPi B+
vx has joined #ipfs
<jasongreen>
downloaded the zip file from ipfs.io
<vx>
Hi
<jbenet>
jasongreen does the RPi B+ doesnt support ipv6?
<wasabiiii>
no ipv6 in kernel?
<jbenet>
s/doesnt/not/
<jasongreen>
not sure
<jbenet>
vx hi /o
<jasongreen>
I haven't done a dist upgrade in a while
<jasongreen>
trying now
<jbenet>
jasongreen: you can remove ipv6 by removing it from the config
<jbenet>
but it would be nice to know if archs out there will fail with this -- we should handle it better
<vx>
What exactly are you talking about?
<jasongreen>
where's the config file?
<jbenet>
jasongreen: regardless, would you mind filing an issue on go-ipfs? https://github.com/ipfs/go-ipfs/issues this shouldn't give you an error when you start, no matter what arch/kernel support for ipv6
<vx>
Ahh
<jasongreen>
sure
<jbenet>
jasongreen: in ~/.ipfs/config or: ipfs config edit
<jbenet>
vx: bug from 0.3.8-dev on RPi B+ with ipv6 addr
ELFrederich has quit [Read error: Connection reset by peer]
<vx>
Ah nice thanks
ELFrederich has joined #ipfs
<jasongreen>
remove the ipv6 line from the swarm section?
<vx>
Is there any special relevance to the RPi for ipfs?
<jbenet>
jasongreen yep
<jbenet>
vx: people use ipfs on pis quite a bit
<jasongreen>
and voila
<jasongreen>
Thanks
<davidar>
RPi supports ipv6, but it's disabled by default
<davidar>
But yeah, it really shouldn't be throwing an error
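The workaround jbenet describes amounts to a small JSON edit to ~/.ipfs/config. Sketched here against an in-memory copy (the address strings mirror typical go-ipfs defaults; check your own config rather than assuming this exact shape):

```python
import json

# A trimmed stand-in for the Addresses section of ~/.ipfs/config.
config = json.loads("""
{
  "Addresses": {
    "Swarm": ["/ip4/0.0.0.0/tcp/4001", "/ip6/::/tcp/4001"],
    "API": "/ip4/127.0.0.1/tcp/5001"
  }
}
""")

def drop_ipv6_swarm(cfg: dict) -> dict:
    """Remove every /ip6/ multiaddr from Addresses.Swarm, leaving the rest intact."""
    swarm = cfg["Addresses"]["Swarm"]
    cfg["Addresses"]["Swarm"] = [a for a in swarm if not a.startswith("/ip6/")]
    return cfg

config = drop_ipv6_swarm(config)
```

In practice you would make the same change interactively with `ipfs config edit`, as jbenet suggests above.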
hyperbot has joined #ipfs
<jasongreen>
How do you find ipfs content without knowing a particular hash?
<zignig>
o/
<jbenet>
jasongreen: there'll be search eventually, but for now you can sit on the dht and listen for hashes. some content will be encrypted though
pfraze has quit [Ping timeout: 244 seconds]
pfraze has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed dev0.4.0 from ff3ad99 to 6e5d1df: http://git.io/vLnex
<ipfsbot>
go-ipfs/dev0.4.0 e80e5be Tommi Virtanen: pin: Guard against callers causing refcount underflow...
<ipfsbot>
go-ipfs/dev0.4.0 3e77c13 Tommi Virtanen: sharness: `fusermount -u` is the documented way to unmount FUSE on Linux...
<ipfsbot>
go-ipfs/dev0.4.0 529d3f1 Tommi Virtanen: sharness: Use sed in a cross-platform safe way...
Pharyngeal has quit [Ping timeout: 272 seconds]
* spikebike
looks for ipv6 rasp bi issue
<zignig>
*pi <ahem> ;)
Pharyngeal has joined #ipfs
<spikebike>
ah, found it
<zignig>
what was it.
<zignig>
?
juul has joined #ipfs
captain_morgan has joined #ipfs
<spikebike>
vx: Rasp Pi is particularly nice for avoiding the numerous issues related to running p2p on phones and related mobile devices
<spikebike>
vx: something like this running IPFS for example
<juul>
so, if i add a file to ipfs does it get stored by other ipfs nodes as well? or does that only happen if they pin it?
legobanana has joined #ipfs
<pjz>
juul: it happens if they request it
<juul>
pjz: but i understand that it will get garbage collected fairly quickly after?
<spikebike>
juul: request or pin
<spikebike>
juul: no garbage collection automatically at the moment
<juul>
so every file you ever request is automatically also re-hosted from your node until you decide to garbage collect?
<davidar>
yes
<pjz>
juul: gc is currently manual-only; the future may hold something different, probably related to how big a cache you wish to maintain. Pinned items are ineligible for gc, so that's another option, correct.
ELFrederich has quit [Read error: Connection reset by peer]
<vx>
spikebike: thank you, that makes sense
<juul>
btw i'm getting "internalWebError: context deadline exceeded" from ipfs.io right now
<vx>
To all of you: is there a place where the structure of the repo is documented? I would really like to contribute to this, but this is my first open source project and I don't get where each part of the project lives
<spikebike>
vx: my hope is that p2p matures enough so a rasp pi+ storage and network running 24/7 could earn credit so a mobile client could check in with minimal battery/power/bandwidth and use those p2p resources that were earned
<juul>
it might be useful to be able to tell an ipfs node "use up to this much storage and bandwidth and prioritize filling it with content from these users prioritizing by some weighted popularity/newness measure"
<juul>
but i'm sure you've thought of that already
<davidar>
jbenet: so, I think I ended up coming to the conclusion that ipld = json + (user defined) types + merkle magic, yes?
<spikebike>
juul: yeah, I've seen that mentioned as planned
<spikebike>
in a practical sense I don't think it's quite needed; there's not that much in IPFS yet, and it only consumes disk you actually asked for
<davidar>
daviddias: thanks for your patience with all my questions btw :)
<spikebike>
so sure if you want to walk every geocities website that might fill a disk
<juul>
spikebike: it's really important since lots of people have servers that they'd use to donate resources to ipfs, but they'd never request anything from those servers
<jbenet>
davidar: yeah roughly. i wanted it to be fully json-ld originally, but wasnt aware how much of the structure it forced
<zignig>
spikebike: the planet file from open street maps is there too.
<spikebike>
juul: ya, but there's no way to automatically donate resources at the moment
<spikebike>
zignig: ah, very cool, didn't know that
pfraze has quit [Remote host closed the connection]
<juul>
ok
<zignig>
i've been looking at getting some GIS stuff into ipfs, tiles at the moment.
<davidar>
jbenet: the other question I had was, why not utilise cbor tags instead of @context keys?
<spikebike>
zignig: yeah, it's a good fit, I have some users that are heavy into related gis stuff and openstreetmap. Currently they are storing tiles in hadoop
<jbenet>
davidar: im wary of any cbor magic, i want to keep 1:1 with json.
<jbenet>
davidar: point is to make it easily human readable.
<jbenet>
davidar meet mikolalysenko
<davidar>
jbenet: could I convince you on yaml types then? :)
<jbenet>
oof i dont even know what those are.
<mikolalysenko>
hello!
<jbenet>
davidar: i like how yaml looks. the format is scarily huge.
<jbenet>
davidar: well, it is. but every type system has this hack, think about how classes/types are implemented in datastructures in go, python, ...
<spikebike>
jbenet: cool, reading for later, thanks.
<davidar>
I've already uploaded a recent planet.osm.pbf to ipfs
<jbenet>
in c/c++, you have the bytes, and a pointer to the class. in python something similar, with classes pointing up the hierarchy, etc.
<mikolalysenko>
davidar: cool!
<davidar>
It's linked in that issue
<mikolalysenko>
ah, I was busy all afternoon
<mikolalysenko>
but that is great!
<mikolalysenko>
next step is to turn that into some tile data
<jbenet>
anyway, dont really have time to discuss LD stuff atm :/ but yeah we should get this right so feel free to open an issue somewhere to discuss it. i'll TAL monday/tue
<davidar>
jbenet: OK, will do
<davidar>
Mikola: yes, do you have experience with generating tiles?
<davidar>
I've tried contacting the OSciM folks about what they do, but haven't had much luck yet
<mikolalysenko>
davidar: not yet, but I can get in touch with people at mapbox who do
<davidar>
Awesome :)
<atgnag>
So, what's on the roadmap for IPFS?
<atgnag>
I know there's IPNS. What else?
<spikebike>
encryption
<mikolalysenko>
also even cooler would be to do direction queries in ipfs
<spikebike>
limiting resource use and garbage collection
<mikolalysenko>
I bet we can do this all in the client once all the relevant data structures are stored in ipfs
<davidar>
Yeah, that would be cool
<davidar>
Yeah, that's the plan :)
<davidar>
Well, not so much a plan as wouldn't it be cool if... :)
<zignig>
getting the osm database into an indexed data blog in ipfs would be cool.
<mikolalysenko>
it isn't too far off
<mikolalysenko>
zignig: davidar already got it in there!
<zignig>
mikolalysenko: yes very cool, seen it , nice work davidar
<jbenet>
atgnag there's a lot, look at the issues. we should make nice "roadmap issues" listing things for upcoming versions. i dont have the bw to do this atm, but can try in coming week or 2
pfraze has joined #ipfs
sharky has quit [Ping timeout: 272 seconds]
Not_ has joined #ipfs
<Not_>
hi
<Not_>
what is already possible to do with ipfs?
<Not_>
is it usable for uploading html/css?
<davidar>
zignig: what do you mean by indexed data blog?
<atgnag>
Not_: Share files and static websites.
<atgnag>
That's all that I know of so far.
<Not_>
who programs ipfs?
<davidar>
Not_: anything you can do with github pages
<davidar>
and more cool stuff when ipld lands
<Not_>
jbenet: or whyrusleeping?
<Not_>
ipld?
<Not_>
whats that
<atgnag>
ditto
<davidar>
Not_: "dump your (json) database into ipfs"
<jbenet>
atgnag it does. it's how git, bitcoin, bittorrent, fossil/venti, tahoe lafs, and so many other systems work (oh and google docs!)
<atgnag>
All right.
<atgnag>
So, should I be able to create an imageboard eventually?
akhavr has quit [Remote host closed the connection]
akhavr has joined #ipfs
legobanana has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<kvda>
@jbenet have you read any of Ted Nelson's work?
pfraze has quit [Remote host closed the connection]
<jbenet>
kvda: i've _met_ Ted :)
<kvda>
lucky man!
<jbenet>
he showed me the recent transclusions demo, pretty cool.
<kvda>
i've had similar ideas of wedding torrent and git upon reading his work
<kvda>
but your work is beyond what i could do; i'm glad that somebody capable is finally getting to it :)
<jbenet>
kvda: dont sell yourself short!
<kvda>
will you attempt to do anything like that with ipfs?
<jbenet>
anything i can do you can do too
<kvda>
haha no really, i would only be able to do this after a couple years off work :) i'd need to get up to speed with a lot more stuff than i'm currently familiar with
<jbenet>
the transclusions? sure anyone can do them on top. you just need a nice visualizer.
<kvda>
is text 'atmoicity' part of the spec at all? are you thinking of breaking up texts into smaller 'objects', ie. so that texts are generally just an ordered list of 'objects of text'
<kvda>
ah great to hear :)
<atgnag>
I'd love a DVCS with a DHT.
<kvda>
*atomicity if that's even a word, i feel ok with using it since ted came up with so many
inconshreveable has quit [Read error: Connection reset by peer]
erikd has quit [Quit: disappearing into the sunset]
nessence has joined #ipfs
<kvda>
jbenet, by atomicity of text i mean that storing an essay as one large object means you can't do things like transclusion, but if you figure out a way of breaking that text up you'd be able to reference certain parts of a text within the protocol itself
<kvda>
does that make sense?
<jbenet>
kvda: so, i disagree with the idea that you cant do it. it's trickier, but so is storing text like that.
<jbenet>
in any case, ipfs lets you structure data, so you can do it.
<jbenet>
and federated wiki is doing it at the paragraph level.
mildred has joined #ipfs
nessence has quit [Remote host closed the connection]
nessence has joined #ipfs
nessence has quit [Remote host closed the connection]
nessence has joined #ipfs
nessence has quit [Remote host closed the connection]
nessence has joined #ipfs
jhulten has quit [Remote host closed the connection]
jhulten has joined #ipfs
jhulten has quit [Ping timeout: 250 seconds]
inconshreveable has joined #ipfs
inconshreveable has quit [Read error: Connection reset by peer]
inconshr_ has joined #ipfs
ygrek has joined #ipfs
captain_morgan has quit [Ping timeout: 240 seconds]
lidel` is now known as idel
voldial has quit [Quit: WeeChat 1.2-dev]
domanic has quit [Ping timeout: 255 seconds]
rossjones has joined #ipfs
<davidar>
jbenet: is federated wiki running on ipfs yet, or is it just planned?
<jbenet>
has been discussed. we did a bit of integration (putting static content on ipfs) but not the whole thing. it needs to be browserified first
ygrek_ has joined #ipfs
ygrek has quit [Ping timeout: 240 seconds]
<kvda>
thanks jbenet :)
<kvda>
wait a sec, this is already implemented in a way: 'it follows the unixfs data format when doing this. what this means is that your files are broken down into blocks, and then arranged in a tree-like structure using link nodes to tie them together'
<kvda>
it's just that when it comes to text the block should be a paragraph
<spikebike>
but paragraphs don't have a maximum size
atomotic has joined #ipfs
Severak has joined #ipfs
<Severak>
morning all
<kvda>
that abstraction is so close that it's worth considering though, 'your text files are broken down into paragraphs which are broken down into blocks' :)
<kvda>
and because it's text you could be smart about where you break a text into blocks, ie at the end of a sentence
<kvda>
rather that mid way through one
<kvda>
*than
<kvda>
(thinking out loud, just very excited by this!)
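kvda's paragraph-level chunking idea can be sketched as a toy: split at paragraph boundaries, store each paragraph by hash, and tie them together with a 'link node', so identical paragraphs dedupe and individual paragraphs become addressable (the format here is invented for illustration, not unixfs):

```python
import hashlib

def chunk_paragraphs(text: str, store: dict) -> str:
    """Split on blank lines, store each paragraph by hash, and return the
    hash of a 'link node' recording the paragraph order."""
    paragraphs = [p for p in text.split("\n\n") if p]
    child_hashes = []
    for p in paragraphs:
        h = hashlib.sha256(p.encode()).hexdigest()
        store[h] = p                      # identical paragraphs dedupe to one object
        child_hashes.append(h)
    link_node = "\n".join(child_hashes)   # ordered list of children
    root = hashlib.sha256(link_node.encode()).hexdigest()
    store[root] = link_node
    return root

def reassemble(root: str, store: dict) -> str:
    return "\n\n".join(store[h] for h in store[root].split("\n"))

store = {}
doc = "First paragraph.\n\nSecond paragraph.\n\nFirst paragraph."
root = chunk_paragraphs(doc, store)
```

A transclusion is then just a reference to one child hash rather than the whole document, which is the property kvda wants.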
<jbenet>
davidar: is there a way to use matrix.org to make a web bridge for users who want to join in here? (i.e. is there an irccloud-like thing from matrix)?
<jbenet>
Severak hello
<Severak>
is there some use-case document for ipfs? I am interested in it, but there is no obvious way to use it in real life.
<davidar>
jbenet: yes, I'm using vector.im right now
<jbenet>
kvda: i'm not convinced the "logical" break of text (paragraphs, lines, chars, ...) is the best place to do the "implementation" break of text (data blocks optimized for perf)
<kvda>
ah right makes sense jbenet
<jbenet>
kvda: and im also not convinced that we cant figure out the diffs and so on from those blocks.
<davidar>
jbenet: is that what you meant?
<kvda>
but actually it doesn't necessarily necessitate that; there could be a step where the paragraph breakdown happens, and then those are broken into blocks as usual
<jbenet>
like, a lot of the Xanadu and Xanadu-esque stuff falls on "deaf ears" with engineers because it's absurd from a perf standpoint. but the thing is these are two realms; you can do implementations one way to support a high-perf version of a virtual other thing.
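the two "realms" in this exchange, fixed-size implementation blocks versus logical breaks like sentences, can be sketched in a few lines of Python (a hypothetical illustration only, not IPFS's actual chunker):

```python
def fixed_chunks(text: str, size: int = 16) -> list[str]:
    """Implementation-level break: blocks with a fixed maximum size."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def sentence_chunks(text: str) -> list[str]:
    """Logical break: naive split at '. ' sentence boundaries.
    Note sentences have no maximum size, as spikebike points out."""
    parts = text.split(". ")
    return [p if p.endswith(".") else p + "." for p in parts if p]

doc = "IPFS splits files into blocks. Blocks form a DAG. Links tie them together."
print(fixed_chunks(doc)[0])     # a 16-char block, cut mid-sentence
print(sentence_chunks(doc)[0])  # a whole sentence, unbounded length
```

the fixed-size scheme gives predictable block sizes for transfer and dedup; the logical scheme gives diff-friendly boundaries but unbounded blocks, which is the tension being debated here.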
herch has joined #ipfs
<Severak>
to say it another way - i am not interested in developing ipfs, i am interested in using it.
captain_morgan has joined #ipfs
marianoguerra has joined #ipfs
marianoguerra has joined #ipfs
<kvda>
jbenet what would be the difference between diffing a one-file text document vs a paragraph-breakdown version? things moving from one paragraph to another is the obvious one
<davidar>
Severak: do you ever look through your bookmarks and hit a bunch of 404s?
<davidar>
or swear at github going down when it gets ddos'd?
<jbenet>
Severak: "there is no obvious way to use it in real life" did you watch the demo?
<kvda>
jbenet are there any other places you know of where discussions like this might be suitable? maybe a group that'd be interested in exploring this kind of idea on top of IPFS...
<davidar>
jbenet: I've been meaning to bring that up, it might help to have some of that in text on ipfs.io
<jbenet>
Severak: what sort of user? just for personal files? application? or what?
<davidar>
people aren't likely to watch a 30min video if the intro text doesn't grab them
<jbenet>
davidar: yeah for sure. i think the blog may be a good place to put a lot of this
* Severak
didn't watch the video
<jbenet>
kvda: not to my knowledge
<davidar>
jbenet: particularly "what can I actually use ipfs for that's better than the current state of things", and "how is this different to freenet, etc"
* Severak
prefers text introductions
<kvda>
ok i'll keep bringing it up here then, maybe a kindred spirit will chime in eventually :p
<davidar>
seems to be the two big sources of confusion for people
<davidar>
jbenet: zeronet.io seems to do that first bit quite nicely
<Severak>
jbenet: probably personal files, or how to host some of the sites i own
<davidar>
kvda: you can also submit an issue on an appropriate github repo
<davidar>
like ipfs/notes or ipfs/faq or something
<kvda>
i think i might submit a job application first
<jbenet>
davidar: yeah sounds good to me
<kvda>
thanks though davidar :)
* Severak
found quick start document
<davidar>
jbenet: fyi the matrix.org people are also interested in integrating with ipfs
Tv` has quit [Quit: Connection closed for inactivity]
<jbenet>
davidar: do they need a place to put logs?
notduncansmith has joined #ipfs
<davidar>
jbenet: i'm not exactly sure what level of integration they're looking at, but I can ask
notduncansmith has quit [Read error: Connection reset by peer]
<herch>
Hi guys, real newb here. I am here because of that 18min video which I thought was just awesome!
<herch>
I would like to know more about how things work underneath. I don't have much exposure to how bitcoin, torrents, or other p2p systems work.
<herch>
any pointers for getting me started?
<davidar>
jbenet: although I don't think logs can be made publicly available due to no-log rules in some rooms
<herch>
I want to get up to speed with current development on IPFS
<jbenet>
davidar: ah i see
<jbenet>
hello herch: welcome :)
<jbenet>
herch: we have all our repos over at github.com/ipfs -- we really need "a quick start for contributors" document or something. maybe as you learn about ipfs, make one?
Pharyngeal has quit [Ping timeout: 272 seconds]
<herch>
@jbenet: sure! would love to do that. But right now I am trying to understand basics of blockchains. Very preliminary things.
<jbenet>
i think the important thing will be to pick the area you want to contribute to: we have a go implementation, a node implementation in the works, many js websites, lots of efforts to polish UX for various use cases, lots of docs to write + improve, bindings in more languages, etc ....
<jbenet>
herch: suggest starting with the paper, linked from ipfs.io
<herch>
jbenet: sure, I will download that paper.
<kvda>
which things need the most UX/front-end attention jbenet ?
<herch>
jbenet: thanks for that pointer!
chiui has joined #ipfs
Pharyngeal has joined #ipfs
<jbenet>
kvda: ooof. i think lots of stuff. we have some bugs on our website (like broken links i think). the webui needs improvement. we could make more little apps using the api (like a standalone dag explorer)
atomotic has quit [Read error: Connection reset by peer]
jhulten has joined #ipfs
<kvda>
oh you're using Polymer on starlog
<kvda>
thanks for these jbenet, i'm rushing off to dinner, but i'll keep going through these tomorrow. i can help with the website things to begin with
<davidar>
kvda: also making the ipfs.io landing page a better introduction for new users :)
<kvda>
polymer used to have abysmal performance, i'm not sure if things have changed in the last 6 months or so though
<kvda>
davidar, cool ok that might come a bit later as i get familiar with ipfs myself and if nobody beats me to it..
jhulten has quit [Ping timeout: 244 seconds]
rdbr has joined #ipfs
devbug has joined #ipfs
domanic has joined #ipfs
akhavr1 has joined #ipfs
<Luzifer>
jbenet: Please refer to the issue I linked to you. https://github.com/Luzifer/gobuilder/issues/62 - No, it is not "fixed" yet because I haven't had the time to work on this… There is a first stub of the new "build-script" but it's far from "done".
<rdbr>
good morning
akhavr has quit [Ping timeout: 264 seconds]
akhavr1 is now known as akhavr
<davidar>
multivac: good morning <action> waves to $who
<multivac>
Okay, davidar.
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
devbug has quit [Ping timeout: 264 seconds]
<jbenet>
Luzifer: oh i was merely wondering because i saw a build happen on cmd/ipfs but figured someone triggered it manually
<Luzifer>
yeah… most likely a bot marauding through all URLs, even the ones blocked by robots.txt
<Luzifer>
(crawler sucks)
syn5555 has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
_whitelogger has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<rendar>
ipfs is a very cool idea, i had a very similar one some time ago. but if the server has the file saved under its hash form, e.g. /directory/A6/BBF4AD82BC[...], how can the client (for example an html page) retrieve that content while *not* knowing the hash, but only the file's name?
<Quiark>
davidar, yeah I got there but still, at what point should the transition from \ to / occur ... perhaps in the adder just before adding it to IPFS database?
jhulten has joined #ipfs
<rendar>
davidar: yes
<davidar>
rendar: can you rephrase? I'm not sure what you're asking
<davidar>
i don't work on go-ipfs, so I'm just guessing :)
<Quiark>
rendar, you need to get the pointer to the up-to-date version first
<rendar>
Quiark: hmm
<davidar>
rendar: oh, you're asking how to handle modifications?
<rendar>
davidar: sorry, i'll try to explain better: when a client wants a resource from an http server, it simply sends GET /directory/file_name, but in the case of ipfs i should use a hash instead of 'file_name', and i don't know that hash before having the actual file
jhulten has quit [Ping timeout: 250 seconds]
domanic has quit [Ping timeout: 272 seconds]
doublec has quit [Ping timeout: 255 seconds]
Pharyngeal has quit [Ping timeout: 272 seconds]
doublec has joined #ipfs
Pharyngeal has joined #ipfs
domanic has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<jamescarlyle>
so to ask the question in a different way, if I publish an HTML page with a link to a peer resource in the same directory (say an image, or another HTML page), then what is the right value for href? can we use the hash of the parent directory and then a relative path to the peer?
<davidar>
rendar: you can tell people the hash before they actually get the file, no?
akhavr has quit [Read error: Connection reset by peer]
<jamescarlyle>
instead of asking questions, I should just experiment and look at the code :)
<davidar>
jamescarlyle: relative links are easiest, so you don't even need to reference the hash
<davidar>
so index.html has href="images/foo.jpeg"
akhavr has joined #ipfs
<jamescarlyle>
if they share a common parent directory (common parent hash)?
<rendar>
davidar: well yes, so for example an html page will contain the hash and not the foo.jpg filename?
<davidar>
jamescarlyle: yes, otherwise you need to either hardcode the hash into the html, or read it from a querystring
domanic has quit [Ping timeout: 246 seconds]
<rendar>
btw, one can map "foo.jpg"->hash in some way
<davidar>
rendar: if the files are all in the same directory, then you can just use the name
<jamescarlyle>
i guess my question can be reframed: if we have a root directory with multiple levels of child directories, can we use the directory hash + "/"-delimited paths to all files under that root, no matter how deep?
<davidar>
Severak: you need to add a text record to your domain name
<davidar>
for example
<davidar>
ipfs.io. 1800 IN TXT "dnslink=/ipfs/QmZfSNpHVzTNi9gezLcgq64Wbj1xhwi9wk4AxYyxMZgtCG"
<rendar>
davidar: another question: in the video the guy just does ls /ipfs and the directory seems empty, yet he can cd into an ipfs hashed directory name. how?
<davidar>
rendar: it's a fuse mount
<rendar>
i see
<rendar>
davidar: so there is a fuse module that understands ipfs and mounts it into the fs?
<davidar>
rendar: ipfs mount --help
<davidar>
tldr: yes :)
<syn5555>
has somebody thought about a build of chromium with ipfs support? it would be great if everybody could just type something like ipfs://ipfs.io
Severak has left #ipfs [#ipfs]
<davidar>
syn5555: there's already browser extensions for redirecting to the public gateway
<multivac>
there's already browser sextensions for redirecting to the public gateway
<davidar>
and having full ipfs nodes in the browser is planned
<davidar>
multivac, shut up
<multivac>
Okay davidar. I'll be back later
<syn5555>
=-)
jamescarlyle has quit [Remote host closed the connection]
<RX14>
but all the IPFS multihashes i have seen have been sha2-256
<RX14>
start with Qm
<davidar>
yeah, that's the default currently
notduncansmith has joined #ipfs
<davidar>
but it can change
notduncansmith has quit [Read error: Connection reset by peer]
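the "Qm" prefix RX14 mentions falls out of the multihash format: a 0x12 code byte (sha2-256) and a 0x20 length byte (32 bytes) are prepended to the digest, and the whole thing is base58-encoded. a self-contained sketch (the base58 encoder here is a toy, not a library call):

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58(data: bytes) -> str:
    """Minimal base58btc encoding of a byte string."""
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58_ALPHABET[r] + out
    # leading zero bytes encode as '1'
    for b in data:
        if b != 0:
            break
        out = "1" + out
    return out

def multihash_sha256(data: bytes) -> str:
    """sha2-256 multihash: code 0x12, length 0x20, then the digest."""
    digest = hashlib.sha256(data).digest()
    return base58(bytes([0x12, 0x20]) + digest)

print(multihash_sha256(b"hello"))  # always 46 chars starting with "Qm"
```

because the prefix bytes are fixed, every sha2-256 multihash base58-encodes to a 46-character string starting "Qm", which is why all current hashes look alike; a different default hash function would change that prefix.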
<rendar>
any particular reason for sha256 instead of sha1 as the default?
<davidar>
less likely to be broken
<rendar>
ok
<spikebike>
less likely to have a collision, likely usable for a longer period of time
<rendar>
well, a collision in sha1 would also be *very* improbable; i guess all the files that have a human-understandable meaning have different sha1 hashes
<davidar>
rendar: more probable under adversarial attacks though :)
<rendar>
yeah that is true
<davidar>
but yeah, i don't think accidental collisions are much of a concern in sha1 either in practice
<sickill>
anyone here from InterPlanetary Networks?
inconshr_ has quit [Remote host closed the connection]
G-Ray has joined #ipfs
<davidar>
sickill: lgierth should be around soon
<davidar>
why?
<sickill>
ok, just wanted to report that ipfs.io has a link to https://ipn.io which doesn't work (can't connect), http:// works though
<G-Ray>
Hi! As IPFS splits files into multiple blocks, could we achieve the same goal as Maidsafe, which is to distribute those tiny pieces between hundreds of nodes?
<spikebike>
currently blocks are distributed only to those who ask for them (by downloading or pinning)
<davidar>
G-Ray: also see filecoin.io
<G-Ray>
spikebike: Yes I know, but this will never change I guess
<G-Ray>
IPFS will never force a user to store blocks IIRC
<spikebike>
well it's a sensitive legal issue if you are hosting content for a 3rd party
<davidar>
what's the current status of filecoin, btw?
<xelra>
People could group together and all run a pinbot within their group to properly replicate and distribute.
<G-Ray>
spikebike: Even if it is only a tiny part of a file ?
<davidar>
cryptix: i'd like to see filecoin-lite where people can donate space without being paid though
<G-Ray>
spikebike: What could be the legal issues if everyone stores blocks of everyone ?
<RX14>
well, they can donate their earned filecoins to me :P
<G-Ray>
Data would be nowhere and everywhere at the same time
<spikebike>
g-ray: if the FBI generates a list of machines used to download childporn I don't want to be on that list
<cryptix>
you can be held responsible for torrenting in germany even if you haven't sent 100% of it
<G-Ray>
And if the data is encrypted? How will Maidsafe be able to be legal then?
<xelra>
As long as you store encrypted chunks, there shouldn't be any legal repercussions.
<RX14>
gobuilder.me seems to be building windows ipfs... someone should enable windows CI
<cryptix>
davidar: +1. maybe pinbot with signup for volunteers using a custom ipfs service for management. might also be a nice example
jamescarlyle has joined #ipfs
<xelra>
That's actually how I want to use ipfs. In my own little sync universe, where I can distribute backups over WAN. Because most distributed filesystems really don't handle WAN well.
notduncansmith has joined #ipfs
<RX14>
anyone else getting "internalWebError: context deadline exceeded" for https://ipfs.io/docs/api/
notduncansmith has quit [Read error: Connection reset by peer]
<spikebike>
xelra: I'm working on a p2p backup widget, potentially on top of IPFS
<spikebike>
RX14: ipfs got a fair bit more attention in the last few days, the servers aren't quite keeping up
<RX14>
it seems to be every page but that
<RX14>
every page works*
<spikebike>
RX14: they may need a reset, yeah I get the same problem
<RX14>
if it was gateway load the whole of ipfs.io would be slow/down
<RX14>
but it's just that one page
<xelra>
For me it's just very important that I can close this backup system off from the global ipfs. Last time I checked that wasn't possible yet.
<RX14>
xelra, you can just encrypt it,
<spikebike>
xelra: no, that's not easy, although a firewall would work, but yeah encryption would be the obvious answer
<xelra>
Yeah but better both. Private and encrypted.
<spikebike>
dunno, I think it's safest to assume the attacker always has a copy
<RX14>
I suppose it's not possible to make IPFS subnets...
<RX14>
private ones
<spikebike>
xelra: you could just run a private network with non-routeable IPs, enforced with a VPN
<xelra>
I thought it was a planned feature. Had a talk about that earlier this year
<spikebike>
there's an open issue, encryption is still on the todo
<RX14>
well, it's not possible right now...
Quiark has quit [Ping timeout: 260 seconds]
<RX14>
is it possible to get a list of blocks from a node?
<cryptix>
RX14, try /docs/api again ;)
<RX14>
ok, what was the issue?
<cryptix>
dunno - i republished the website
<cryptix>
the beauty of content hashing :P
<xelra>
spikebike: analogy: Even if you think your house key is uncopyable, you don't give it out to everyone to test themselves on it.
<cryptix>
i guess the last recent version was pinned properly
<jbenet>
we need to run it, so maybe lgierth (i would deploy but dont have time atm)
<rendar>
when was ipfs first conceived? it seems it's been around for more than a year
ei-slackbot-ipfs has quit [Remote host closed the connection]
ei-slackbot-ipfs has joined #ipfs
<M-matthew>
also, if you think we should be considering IPFS as a realtime DAG for replicating matrix events around the place anytime soon, please yell, as we're about to do some major architectural/reimplementation work on the reference matrix homeserver implementation, and now would be a good time to do that thought experiment too :)
<davidar>
jbenet: k, i'll ping lgierth when he's around
<davidar>
Matthew: that would be cool, I know a few people have expressed an interest in getting a messaging service on ipfs, so it would be cool if matrix could fill that role
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet>
M-matthew: **yell**
Leer10 has quit [Ping timeout: 250 seconds]
<jbenet>
M-matthew but you want a nicer replication than just on the dht. what we need to do is implement our pubsub for you
border__ has joined #ipfs
rschulman_ has joined #ipfs
compleatang has joined #ipfs
<M-matthew>
jbenet: right - are there notes on how the pubsub & realtime replication would work?
<M-matthew>
i've got to go afk for a bit, but if anyone wanted to scribble feedback on that bug about how things could work, it would be very interesting :)
<mildred>
you should have received a GitHub notification for the PR anyway
dPow has quit [Read error: Connection reset by peer]
dPow has joined #ipfs
<RX14>
mildred, davidar, there are clearly some things that must be done centrally
G-Ray has quit [Ping timeout: 240 seconds]
hellertime has joined #ipfs
<mildred>
like what ? I think we can find ways to do these things a decentralized way.
<RX14>
in a much more complex way...
Encrypt has quit [Quit: Quitte]
G-Ray has joined #ipfs
inconshreveable has joined #ipfs
<rdbr>
mildred, truly decentralized comments by random users mean you'll have to deal with illegal content
<rdbr>
have you had a look at freenet? they have decentralized forums based on a very similar architecture
jhulten has joined #ipfs
jamescarlyle has quit [Remote host closed the connection]
<mildred>
rdbr: yes, but as a site author (you own the IPNS handle) you can periodically look at and review all comments (manually or automatically with a spam detector) and publish a whitelist of the comments you approve of.
<RX14>
there will always be a need for control and centralisation in some systems, even as a legal requirement
inconshreveable has quit [Remote host closed the connection]
<mildred>
Or you can say you accept every comment that anyone (including spam bots) makes, and add a small javascript to your page to fetch each and every comment
<mildred>
Or the javascript could include an antispam
<mildred>
I think IPNS records could be used as the new central point if needed.
<RX14>
i believe that ipfs and the like will decentralise a lot of things, but not everything, and that HTTP or the like will continue to coexist
<rdbr>
freenet forums solve the problem with captchas iirc
<rdbr>
but you won't be able to decentralize everything, ever.
<RX14>
^
<rdbr>
ipfs works with ipv6 and via proxies, right?
jhulten has quit [Ping timeout: 255 seconds]
<mildred>
don't know, but I suppose it does work
inconshreveable has joined #ipfs
<svetter>
rdbr: a recent patch made it bind to :: by default, so there might not be a lot of peers as of now
<svetter>
but running a v6 only node to test it out was my plan
<svetter>
just haven't had time
<RX14>
how do you move your ipfs identity around multiple computers?
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<RX14>
can i put it on my smartcard?
<davidar>
rdbr, RX14: not with that attitude :)
<rdbr>
I'd like to run a v6 only node, but I don't see the point right now..
<davidar>
i mean, decentralise everything :)
* davidar
afk
<RX14>
so each node has a private key, if I move that private key between nodes, I can be "me" on the ipfs network from another PC
<RX14>
now you just need to allow using the gpg agent so I can use my smartcard with it
jamescarlyle has joined #ipfs
<lgierth>
davidar: what is it?
<lgierth>
re: website
<davidar>
lgierth: https://ipn.io doesn't resolve, and a new version of pinbot needs to be deployed
<RX14>
what is ipn.io?
<multivac>
"What Is Ipnio" would be a nice name for a rock band.
<RX14>
shut up multivac
inconshreveable has quit [Remote host closed the connection]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<lgierth>
RX14: you do it like this:
<lgierth>
multivac: shut up
<multivac>
Okay, lgierth - be back in a bit!
<lgierth>
:)
<lgierth>
davidar: looking what's up with ipn.io
<lgierth>
it's github pages
marianoguerra has quit [Quit: leaving]
<lgierth>
oh yeah that's why https doesn't work
<davidar>
ugh, bucket has some annoying defaults, i'll disable them soon
<lgierth>
it's not a good band name either
<lgierth>
my next band is gonna be "the barbie dreamhouse experience"
pinbot has quit [Remote host closed the connection]
pinbot has joined #ipfs
atomotic has joined #ipfs
G-Ray has quit [Ping timeout: 240 seconds]
<lgierth>
davidar: there's the new pinbot ^
G-Ray has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
domanic has quit [Ping timeout: 264 seconds]
marianoguerra has joined #ipfs
marianoguerra has joined #ipfs
akhavr has quit [Quit: akhavr]
akhavr has joined #ipfs
<davidar>
lgierth: cool, can you !befriend me? :)
<lgierth>
!befriend davidar
<lgierth>
mh
<lgierth>
!pin something
<lgierth>
meh
pinbot has quit [Remote host closed the connection]
<davidar>
hopefully @mikolalysenko can get some more info on how to actually do it :)
<davidar>
lgierth: yay, what was wrong?
<zignig>
I was thinking of taking planet.osm, generating an index with file offsets, and generating and adding tiles (raster and vector) as they're processed.
<zignig>
2) merge the hashes together and republish.
<zignig>
step 2 needs some more ipfs infrastructure first.
<davidar>
zignig: i think that's kind of what mapsforge compiled maps do
<davidar>
or do you mean on-the-fly rendering?
G-Ray has joined #ipfs
<zignig>
have it all in ipfs , but render the tiles on the fly and add them back to ipfs.
<davidar>
zignig: yeah, I guess that would be possible, but pregenerated would be a lot simpler
<davidar>
or at least, processed enough that rendering can happen client-side without having to download the entire planet.osm
<zignig>
there are some quite good geojson browser render libs.
<davidar>
sure, assuming you've pregenerated the geojson :)
<zignig>
one of the things that I would like to do with ipfs is have a pipeline for processing files.
<davidar>
vector tiles are essentially raw osm data broken into tiles, then rendered clientside
<davidar>
zignig: like a map-reduce system?
<zignig>
OSM data is a bit unwieldy for pre-generating map/reduce key lists.
<davidar>
(the other kind of map :)
<ehd>
wow, that is a big file.
<zignig>
yeah , represent the indexes _in_ ipfs data.
<ehd>
(planet.osm)
<davidar>
so, what kind of pipeline?
<davidar>
ehd: yeah, but not that big in the grand scheme of things
<zignig>
1) get existing file.
<ehd>
davidar: hahaha, yes.
<ehd>
i kind of want to buy a disk and start downloading it though.
<davidar>
ehd: there's a few terabytes of stuff in the pipeline to publish for ipfs/archives
<zignig>
quad trees and GEO data broken into /<folders> would be quite cool.
<zignig>
davidar: I have 16Gb of old school thingiverse files that need processing and hosting.
<davidar>
zignig: i guess a directory of vector tiles is essentially a quad-tree-like structure?
<davidar>
well, not exactly, but similar idea
<zignig>
most of the time, you can store point-to-point routing as well.
<davidar>
assuming you only go as deep as necessary
* zignig
remembers that it _is_ turtles all the way down.
<davidar>
zignig: i know a little bit about these things, but sounds like you have more experience with geo stuff, so very interested in your thoughts about how routing,etc could be done in ipfs :)
<davidar>
zignig: what is it that you do, btw? geostats?
* ehd
is downloading the PBF file
<zignig>
ehd, start with something small (a city or state)
<ehd>
aww, i was all like "i'm gonna index this 25 gigabyte monster very efficiently!"
<ehd>
but you're right :p
<zignig>
networking and sysadmin on remote linux clusters by day... but science + data
G-Ray has quit [Ping timeout: 250 seconds]
<zignig>
here is a tile cache i've written; there is no ipfs interface but a big bolt.db stores the tiles.
<pinbot>
now pinning /ipfs/QmZdWjYos4amoNGxHzFQXdeadmUdAeJBGUygkpQtMyMMfH
<zignig>
golang server for png tiles from openstreetmaps.
<ehd>
please raise your hand if you think people should stop abbreviating latitude and longitude
ianopolous2 has quit [Ping timeout: 246 seconds]
<ehd>
it's as if i live in a world where few things are abbreviated, but longitude shall forever be lon or lng, and i will always have bugs because of this
nessence has quit [Remote host closed the connection]
nessence has joined #ipfs
<davidar>
hi bielewelt!
<bielewelt>
... or to learn how to use ipfs add better.
<bielewelt>
Hi davidar!
<bielewelt>
What I did was `ipfs add -r -p -w oai_dc-dump-2015-07-06-firstmillionrecords.tar` although I was not quite sure what the `-w` option would do.
<flounders>
Haskell is definitely the way to go. The headaches get duller after you have spent a few months with it first.
<bielewelt>
Will I be able to add other files into the automatically created directory object?
<davidar>
bielewelt: -w just puts the file in a directory so it has a non-hashy name
<davidar>
no, ipfs objects are immutable
<bielewelt>
@flounders: That's what they say about the Brooks bicycle saddles, too. But it was not true in my case.
<davidar>
you'll need to add a new directory
<davidar>
(but ipfs will deduplicate files)
<bielewelt>
I see.
nessence has quit [Ping timeout: 256 seconds]
notduncansmith has joined #ipfs
<davidar>
ipns provides mutable links to handle modifications, but it's not quite ready yet
notduncansmith has quit [Read error: Connection reset by peer]
<davidar>
and i think whyrusleeping is working on a git-like staging system
herch has joined #ipfs
<davidar>
or, just being able to copy files into the fuse mount
<davidar>
something like that
<herch>
Hi, how would a search engine work if we have all the sites running on IPFS and Ethereum?
<herch>
just a question that came to my mind, but i could not find an answer
<davidar>
bielewelt: anyway, this deal
<herch>
A search engine like Google
<bielewelt>
So the BASE dump is basically a flat directory containing 77000 files. Should I bundle these into a TAR file before releasing them?
<davidar>
bielewelt: for the moment, yes
<davidar>
ipfs currently chokes on directories with lots of files
rschulman_ has joined #ipfs
<davidar>
whyrusleeping has proposed a fix for that, but I haven't had a chance to test it yet :/
<bielewelt>
We had a good contact with them and helped them improve the OAI-PMH interface so we could harvest all of it.
<davidar>
bielewelt: could you put in a good word for us to try to get things moving a bit more? :)
<davidar>
i can loop you into the current thread if that would help
hyperbot has joined #ipfs
<davidar>
(on the fulltext side of things, looks like getting metadata via BASE will probably be easier :)
jhulten has quit [Ping timeout: 244 seconds]
nessence has joined #ipfs
nessence has quit [Client Quit]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<bielewelt>
I'm still trying to find out who in the BASE team was in contact with CiteSeerX. Meanwhile, you can put me in the loop.
jamescarlyle has joined #ipfs
<rendar>
i wish to understand ipfs internals better. for example, where does it store file metadata?
erikj has joined #ipfs
<cryptix>
rendar: like mtime, acls, etc? currently it doesn't, to preserve content addressability
voxelot has joined #ipfs
<rendar>
cryptix: hmm, what you mean with addressability?
<zignig>
cryptix: nor should it ever; the file blob is just that, a blob.
<RX14>
adressable by content
<RX14>
not content + meta
<cryptix>
trying to find the issue...
<zignig>
metadata can evolve, I vote for free-form binary and see what comes out in the wash.
<davidar>
bielewelt: done
<zignig>
don't try to predefine the metadata structure , let it grow.
<cryptix>
if you create the same dir structure and add it, you currently get the same hash because only data and dir structure is preserved
<cryptix>
if you add mtime, uid etc that breaks
<zignig>
cryptix: did you get mapping to compile ?
<cryptix>
zignig: sorry - preoccupied
<RX14>
what are the chunkers for ipfs add called again?
<cryptix>
importers
<cryptix>
dont know if we switched to rabin by default yet
<RX14>
and, when you ipfs add it copies the data to ~/.ipfs right?
<RX14>
so i need 2x data space on my hdd?
<davidar>
cryptix: i vote for adding optional support for mtime in certain scenarios (e.g. wget --mirror sets mtime to the authoritative date returned by the server)
<cryptix>
RX14: or remove the old copy - yes. a shallow/torrent-like approach was proposed but not yet worked on
<RX14>
what about btrfs
<multivac>
"What About Btrfs" would be a nice name for a rock band.
<RX14>
does it copy --reflink?
<RX14>
multivac: shut up
<multivac>
Okay, RX14 - be back in a bit!
<davidar>
oops, fixing multivac now
<cryptix>
i dont think we treat any fs differently
<RX14>
the issue is that if you link them, when you modify the file again the data changes in the store
<RX14>
and then it's not valid right?
<cryptix>
rendar: large files are split into chunks. you can't link with an offset into the file, i was told
multivac has quit [Remote host closed the connection]
<RX14>
hmmn, that's an issue
<cryptix>
RX14, yup - you need to rehash on read if you want to be a nice peer
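cryptix's "rehash on read" point, that a reflinked copy modified in place would no longer match its advertised hash, is easy to illustrate (a sketch, not go-ipfs's actual verification code):

```python
import hashlib

def verify_block(expected_hex: str, data: bytes) -> bool:
    """A nice peer re-hashes data on read and only serves it if it
    still matches the hash it is advertised under."""
    return hashlib.sha256(data).hexdigest() == expected_hex

original = b"some block data"
advertised = hashlib.sha256(original).hexdigest()
print(verify_block(advertised, original))              # still valid
print(verify_block(advertised, b"modified in place"))  # stale, must not serve
```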
<rendar>
davidar: thanks
<RX14>
'tis a pain
<bielewelt>
davidar: same over here. cu!
<RX14>
my home partition only has 9GB free
<rendar>
wait, why are large files split into chunks? i don't get this
<zignig>
gnite all. ;)
pfraze has joined #ipfs
<RX14>
which is absolutely plenty, would last me a year if i didn't store stuff on ipfs
akhavr has quit [Ping timeout: 264 seconds]
<ehd>
rendar: so you can load file parts from different peers?
wasabiiii has joined #ipfs
<rendar>
ehd: yep, that would be useful, but... how can you do that? i mean, if the client needs the whole file, it will ask for the first chunk.. and the other chunks? they must be linked to each other somehow
<davidar>
bielewelt: ok, thanks for all your help with this, chat later :)
<RX14>
hmmn, i don't think btrfs duperemove would really work with chunked content
<davidar>
night zignig
<RX14>
rendar, large files are just a node with links to the chunks
<rendar>
hmmm ok
<rendar>
RX14: but each chunk will have a different hash, right?
bielewelt has quit [Quit: Leaving.]
akhavr has joined #ipfs
<blame>
hello world! I'm spending the next 3 hours focused on ipfs related-development.
<RX14>
rendar, yes
<RX14>
so the hash you get is simply a node that links to the sub hashes
<rendar>
i see
<rendar>
a node in the merkle tree
pfraze has quit [Remote host closed the connection]
<rendar>
or hash tree, or whatever it's called
<rendar>
:)
<rendar>
right?
multivac has joined #ipfs
<RX14>
yes
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<rendar>
RX14 so each node in that tree represents an entire file or a chunk of a file
<davidar>
ok, multivac should (hopefully) be less annoying now
<rendar>
RX14: sorry, i meant each leaf, not each node of that tree
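RX14's description, a root node that is just links to the chunk hashes, can be sketched like this (toy hex hashes and a flat link list, hypothetical; real IPFS uses protobuf merkledag nodes and multihashes):

```python
import hashlib

CHUNK_SIZE = 4  # tiny for illustration; real chunkers use much larger blocks

def chunk_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size chunk: these are the leaves of the tree."""
    return [hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
            for i in range(0, len(data), CHUNK_SIZE)]

def root_hash(data: bytes) -> str:
    """The root node is just the list of links to chunk hashes, itself hashed."""
    return hashlib.sha256("".join(chunk_hashes(data)).encode()).hexdigest()

# identical content yields an identical root (deduplication for free);
# changing any single chunk changes the root
print(root_hash(b"hello world!") == root_hash(b"hello world!"))
print(root_hash(b"hello world!") == root_hash(b"hello world?"))
```

this is also why chunks can come from different peers: each leaf hash can be fetched and verified independently, and the root ties them back together.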
atomotic has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<wasabiiii>
I was thinking about the potential to build a blockchain on top of IPFS. How could this actually be accomplished? You'd need some way to query for objects related to the blockchain.
pfraze has joined #ipfs
thomasreggi has joined #ipfs
pfraze has quit [Remote host closed the connection]
<ehd>
wasabiiii: which aspect of a blockchain? you can already generate a long chain of DAG nodes. maybe some of the bitcoin crypto/proof-of-work stuff?
<blame>
wasabiiii: sure. IPFS is a block store
voxelot has quit []
<wasabiiii>
Yeah, storing the blocks is fine. Finding them, however, is a different issue.
<rendar>
wasabiiii: you can query ipfs
<wasabiiii>
Did not know that. Though you could just query by object hash.
<blame>
wasabiiii: Just keep a reference to the head node, do proof of whatever to generate new blocks. You are on your own for disseminating the new block. Honestly the "gossip" random-k connected network style is better for that problem
<RX14>
well, you could create an ipns of the chain head and then follow it down
<wasabiiii>
But then who would be responsible for updating the ipns. ;)
<RX14>
you :P
<blame>
another option I've not considered
<wasabiiii>
That seems like a poor way to build a block chain.
<blame>
we could store the bitcoin blockchain on ipfs.
<RX14>
wasabiiii, yeah, it is a bit silly
<wasabiiii>
If one person is responsible for updating a top level node, theres zero actual security.
<RX14>
yup
<blame>
wasabiiii: well depends on how much you trust that guy
<wasabiiii>
What ya need is a way to be able to tag some objects with something, like "this is a MagicCoin block", and then query peers for that.
<blame>
for example, verisign does offer __some__ security
pfraze has joined #ipfs
<rendar>
wasabiiii: your original question was about asking ipns if the hash you provide is part of the repository?
<wasabiiii>
Don't think so.
<rendar>
wasabiiii: so what you mean by "finding them" ?
<RX14>
how do you find which blocks are part of the blockchain
<RX14>
you don't know their hashes
<RX14>
and obviously you cannot calculate the hashes without the data
<rendar>
exactly..
<rendar>
this is why i can't get the wasabiiii question
border has joined #ipfs
<RX14>
urr why?
<rendar>
RX14, because as you said, without the file you can't have the hash, so you can't find the hash
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<RX14>
and wasabiiii is asking for a solution to this problem
<rendar>
sorry, misread
<blame>
RX14: wasabiiii: Don't tell. You should be able to sanity-check any blocks out there. Just try all the blocks that hit ipfs
<RX14>
so can you get a broadcast of new blocks?
<RX14>
that would solve the problem
<RX14>
just sit and wait for a block to happen
<rendar>
i want to understand one key point: consider you have a hash: searching would mean traversing the hash tree with that hash as the one to find?
Encrypt has quit [Quit: Quitte]
<wasabiiii>
That might work, a broadcast for new blocks. Then you can always use the new ones to find the previous ones.
<wasabiiii>
It's also kinda silly though. :)
<blame>
rendar: are you talking about the distributed hash table? or a Merkle tree?
<rendar>
Blame: well, the Merkle tree is the data structure used locally, while the DHT is used to "map" remote nodes, right?
<blame>
wasabiiii: the better solution is to use a randomly connected p2p network to "gossip" head nodes, then host an archive of the blockchain
border__ has quit [Ping timeout: 240 seconds]
<blame>
wasabiiii: you could either use the same method ipfs uses to hash blocks, or you could put an "archive reference" in the head block
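The "keep a reference to the head" idea from this exchange can be sketched as follows (a toy model: the dict stands in for IPFS block storage, and the JSON block format is invented for the sketch). Each block links to its predecessor by hash, so knowing only the head hash lets you walk the whole chain.

```python
# Toy content-addressed chain: blocks live in a hash-keyed store and
# link to the previous block by hash; the head hash recovers everything.
import hashlib
import json

store = {}  # hash -> serialized block (stand-in for IPFS block storage)

def put_block(payload, prev):
    raw = json.dumps({"payload": payload, "prev": prev}, sort_keys=True).encode()
    key = hashlib.sha256(raw).hexdigest()
    store[key] = raw
    return key

def walk(head):
    # knowing only the head hash is enough to recover the whole chain
    while head is not None:
        block = json.loads(store[head])
        yield block["payload"]
        head = block["prev"]

genesis = put_block("genesis", None)
head = put_block("block 2", put_block("block 1", genesis))
```

Disseminating the *new* head is the part IPFS alone doesn't solve, which is why the gossip-network suggestion comes up.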
FunkyELF has joined #ipfs
<blame>
rendar: merkledag is used to organize the blocks that make up a file
<blame>
rendar: a DHT is a system that allows for decentralized key->value lookup
<rendar>
Blame: yeah, and every leaf is a file/chunk right?
flounders has quit [Read error: Connection reset by peer]
<blame>
so, given the root key for a file. You can find the root of that file's merkledag on another machine.
<blame>
that points to all the non-leaf merkledag nodes (may take multiple iterations) and blocks
<blame>
the dht stores a list of "providers" for a block at its hash
<rendar>
Blame: yep, i knew that, but when ipfs receives a query for a file with hash QmABCX123 for example, does it have to traverse the Merkle tree, or does it just copy the file whose file name is "QmABCX123"?
solarisfire has joined #ipfs
<blame>
then you go negotiate with those guys for a copy.
<blame>
It has to traverse
<blame>
HASH VALUE -> Merkledag -> block hashes -> guys with those blocks
hjoest has quit [Ping timeout: 246 seconds]
<rendar>
Blame: ok
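Blame's pipeline (hash value -> merkledag -> block hashes -> peers with those blocks) can be modeled with two dicts. Everything here is hypothetical: the "QmRoot"/"QmLeaf" names, the peer names, and the plain-list root node are illustration only, not real IPFS wire formats.

```python
# Toy model of the lookup pipeline:
# root hash -> merkledag node -> chunk hashes -> DHT provider records.
dht_providers = {          # hash -> peers advertising that block
    "QmRoot":  ["peerA"],
    "QmLeaf1": ["peerA", "peerB"],
    "QmLeaf2": ["peerB"],
}
blocks_on_peer = {         # what each peer actually stores
    "peerA": {"QmRoot": ["QmLeaf1", "QmLeaf2"], "QmLeaf1": b"hello "},
    "peerB": {"QmLeaf1": b"hello ", "QmLeaf2": b"world"},
}

def fetch(block_hash):
    for peer in dht_providers[block_hash]:      # ask the DHT who has it
        if block_hash in blocks_on_peer[peer]:  # negotiate with that peer
            return blocks_on_peer[peer][block_hash]
    raise KeyError(block_hash)

def cat(root_hash):
    links = fetch(root_hash)                    # the merkledag root node
    return b"".join(fetch(link) for link in links)
```

Note that different chunks can come from different peers, which is the "load file parts from different peers" point from earlier in the log.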
<RX14>
metadata tagging on blocks would be great...
atomotic has joined #ipfs
jhulten has joined #ipfs
<rendar>
Blame: could it skip traversing the merkledag? i mean, if i have to find the file with hash 'QmABCX123' i simply look for the file, e.g. '/path/QmABCX123', and if the file doesn't exist i query the DHT to see if there are copies on remote machines..
<blame>
There is no file /path/hash
<rendar>
Blame: hmmm ok, so how is the data stored?
<blame>
when you call that url, it finds (on the network, or cached) the merkledag object for that file, then the chunks, then assembles them into a buffer and sends it to you
<blame>
the only thing that changes is where the merkledag and chunks are stored
<blame>
after you request it the first time, they are cached locally
<blame>
so you don't have to go to the network for them again.
<RX14>
is there a maximum cache size for ipfs?
<blame>
how big is your HD?
<blame>
that,
<RX14>
...
<RX14>
that's not very nice :p
<rendar>
Blame: hmmm, ok, but when you have some files/chunks cached or stored in your local node, where is the data? i mean, git creates those directories like A6/BCDD56 ...etc... how does ipfs do this?
<blame>
you can free up space with ipfs gc
jhulten has quit [Ping timeout: 252 seconds]
<blame>
rendar: aha, yes very similar to git locally
<blame>
but my point is the file you think of as /path/hashid stores the root of the merkledag, NOT the whole file.
<rendar>
Blame: so /path/hash exists somewhere...as i said
<rendar>
hmmm
<blame>
the file is stored in chunks in the same location
<RX14>
Blame, but if we want ipfs to be used by non-technical people, they won't want to run ipfs gc
<blame>
agreed. it is not a priority right now
<RX14>
yup
border__ has joined #ipfs
<rendar>
Blame: why would /path/hashid store the root of the merkledag, and not the whole file?
<blame>
the long term solution is a config value of "how much un-pinned data should be cached?"
<RX14>
yes
<RX14>
and proper UI of what is pinned ofc
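Blame's "long term solution" (a config value for how much un-pinned data to cache, plus gc) can be sketched as a size-bounded cache that never evicts pinned blocks. This policy is my own hypothetical illustration, not the actual go-ipfs repo implementation.

```python
# Hypothetical cache policy: pinned blocks are never evicted; un-pinned
# blocks are garbage-collected oldest-first once the cache exceeds a
# configured byte budget.
from collections import OrderedDict

class BlockCache:
    def __init__(self, budget):
        self.budget = budget          # max bytes of *un-pinned* data
        self.blocks = OrderedDict()   # hash -> bytes, oldest first
        self.pinned = set()

    def unpinned_size(self):
        return sum(len(v) for k, v in self.blocks.items()
                   if k not in self.pinned)

    def gc(self):
        for key in list(self.blocks):
            if self.unpinned_size() <= self.budget:
                break
            if key not in self.pinned:
                del self.blocks[key]  # evict oldest un-pinned block

    def put(self, key, data, pin=False):
        self.blocks[key] = data
        if pin:
            self.pinned.add(key)
        self.gc()

cache = BlockCache(budget=10)
cache.put("a", b"12345", pin=True)    # pinned: survives gc
cache.put("b", b"123456789")
cache.put("c", b"12345678")           # pushes un-pinned total over budget
```

With automatic eviction like this, non-technical users never need to run `ipfs gc` by hand, which is RX14's concern above.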
<blame>
rendar: because the contents of /path/hash1 should, when hashed, match hash1
<rendar>
Blame: hmm, ok i got it
<blame>
the top level hash is NOT a hash of the file, rather it is the hash of the root of the merkle tree.
larsks has joined #ipfs
<rendar>
Blame: btw, all files goes in .ipfs like .git ?
<RX14>
yes
<larsks>
Is there a specification for ipfs urls? E.g., ipfs:<hash> or something?
<RX14>
well it would be ipfs://<hash>/xxx
<pfraze>
did I hear the ipfs.js implementation is in progress?
<larsks>
RX14: Sure. I was just asking if there was an existing standard...
<blame>
right now most folks use their own gateway or ipfs.ip/ipfs/hash
<blame>
*ipfs.io/ipfs/hash
<larsks>
Blame: that looks like the opposite of a URL format. I am looking at adding ipfs support to a tool that expects a url, so I need a schema:<data> format of some sort.
<larsks>
(The tool can use custom handlers based on the url schema)
<blame>
jbenet has been against an ipfs:// sort of thing
<RX14>
why not?
<blame>
we are pushing to get away from that, for something more composable
<RX14>
it makes sense
<larsks>
Blame: yeah, why? That is pretty typical.
<larsks>
Although I would suggest simply ipfs:<hash>, no "//" because there is no network address.
<blame>
/ipfs/addr is what we want
<larsks>
Blame: that's not in url format.
<blame>
no, it is in a path format
<larsks>
So not useful for tools that need to extract a scheme to differentiate between, say, http vs ipfs vs ftp...
<RX14>
some things just want URLs, stop reinventing the wheel
<rendar>
i don't see the point here
<rendar>
why ditching the world wide used url format?
<blame>
because of what you could do with something like this -> /ipv6/1234:1224:1234:1234/ipfs/hash
<larsks>
Blame: why would you do that? That is, why would you specify an address for an ipfs object? I thought the idea was that the system was content addressable, so only the hash should be necessary.
<rendar>
Blame: hmm, what would that express?
<blame>
if I did not have an ipfs gateway locally, it would say go here and then use ipfs. The idea is not to make this special just for ipfs but to push away from "protocol://vagueinfo"
<larsks>
Blame: that seems like a terrible idea.
<RX14>
just use ipfs:<hash>
<blame>
I'm not the one to be explaining this
<larsks>
RX14: that is my inclination, as well.
<lgierth>
one problem with URLs is that they carry a lot of out-of-band knowledge, e.g. if i get a http URL, i have no idea whatsoever what i'm gonna do with it. we want addresses that are completely self-describing, e.g. http://example.net/foo => /ip4/1.2.3.4/tcp/80/http/foo
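lgierth's self-describing address can be parsed as alternating protocol/value segments. A minimal sketch of the idea, not the official multiaddr spec (which defines a binary codec and a fixed protocol table); treating the trailing path as the value of `/http` follows lgierth's example but is a simplification.

```python
# Minimal parser for self-describing addresses like
# /ip4/1.2.3.4/tcp/80/http/foo -> [(protocol, value), ...].
# Protocols in VALUE_PROTOCOLS consume the next path segment as a value.
VALUE_PROTOCOLS = {"ip4", "ip6", "tcp", "udp", "ipfs", "ipns", "http"}

def parse_multiaddr(addr):
    parts = addr.strip("/").split("/")
    out, i = [], 0
    while i < len(parts):
        proto = parts[i]
        if proto in VALUE_PROTOCOLS and i + 1 < len(parts):
            out.append((proto, parts[i + 1]))
            i += 2
        else:
            out.append((proto, None))
            i += 1
    return out

layers = parse_multiaddr("/ip4/1.2.3.4/tcp/80/http/foo")
# each layer tells the client exactly which protocol to speak next
```

The payoff is composability: the same parser handles `/ipfs/Qmfoo` alone or wrapped in any transport stack, with no out-of-band knowledge.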
<RX14>
then paths on the end if you need
* blame
calls jbenet and whyrusleeping for help
<lgierth>
if you *have* to use a URL, go with ipfs:hash and ipns:hash
* blame
just has to wait 6 hours for them to wake up
<lgierth>
or ipfs:/ipfs/hash and ipfs:/ipns/hash
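larsks's use case is a tool dispatching on URL scheme, and Python's standard `urlsplit` handles the authority-less `ipfs:<hash>` form he suggests (like `mailto:`), which is why omitting the `//` is the right call when there is no network address:

```python
# Scheme-based dispatch on an authority-less URL: urlsplit parses
# ipfs:<hash> into a scheme and a path, with an empty netloc.
from urllib.parse import urlsplit

u = urlsplit("ipfs:QmSomeHash")
# u.scheme tells the tool which handler to use; u.path carries the hash
```

A dispatcher can then map `u.scheme` to a handler for `http`, `ftp`, `ipfs`, and so on.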
<RX14>
so then we use multiaddr:/ipv4/xxx/tcp/xx/ipfs/cxxx
<RX14>
i don't see the point of encoding a host in there
<lgierth>
yeah something like that
<pfraze>
I think, inevitably, yall will need to use a scheme identifier up front
<RX14>
it makes it completely useless to serve statically
<pfraze>
but the multiaddr scheme is fine
<wasabiiii>
I think an address format that allowed nesting would be neat.
<lgierth>
multiaddr does
<RX14>
because then you are encoding the port to get it from, when that is data that the client should set
<RX14>
not whoever is providing the URL
<lgierth>
you can compose addresses with all kinds of protocols
<lgierth>
RX14: why?
<lgierth>
that's out-of-band information in that case
<pfraze>
yall should use 'mad:', short for multi-address
<lgierth>
pfraze: :)
<RX14>
lgierth, can you leave the ip addresses out?
<RX14>
like
<RX14>
mad:/ipfs/xxx
<wasabiiii>
I think there's an argument for leaving the port off, if you're using protocols where port resolution is to be done with SRV records, for instance.
<RX14>
or do you need the stupid host stuff
<vx>
Guys, have you thought about a non-techy way to have simple addresses for websites?
<lgierth>
RX14: if you only wanna represent a hash, go with ipfs:hash or ipfs:/ipfs/hash
<RX14>
wasabiiii, providing an IP address in a specifier for a distributed protocol is pretty stupid
<RX14>
lgierth, yes that was the idea...
<vx>
Because i think we absolutely need that
<pfraze>
RX14: it's not *required* in all addresses
<wasabiiii>
RX, I don't think so, always. The multiaddresses shouldn't really appear to users anyways.
<blame>
vx: the working solution is to host a gateway and use ipns
<vx>
Something easily usable and without a hash in the url
<solarisfire>
Just wondering if ipfs has any inbuilt protection against it.
<lgierth>
it's more likely that the sha256 hash function is broken, and in that case we can relatively easily migrate to another (see github.com/jbenet/multihash)
jamescarlyle has quit [Remote host closed the connection]
<pjz>
solarisfire: sure, but md5 is more known-broken than other hash functions
<solarisfire>
Yeah, but people who want to be malicious can try and create multiple files with the same hashes :-)
<solarisfire>
Because humans are douchebags...
<solarisfire>
It'll happen one day ;-)
<pjz>
solarisfire: and ipfs's solution to that is to support multiple hash functions
<solarisfire>
Yeah, that's a pretty good workaround...
<lgierth>
solarisfire: usually hash functions don't break from one day to the next
<solarisfire>
You're never going to collide them all at once.
<lgierth>
they get weakened over time
<pjz>
solarisfire: well, again, theoretically possible, but practically very very unlikely
<lgierth>
i.e. the effort required to create collisions decreases, slightly, again and again, and we can notice and migrate
pfraze has joined #ipfs
<pjz>
solarisfire: if it becomes a problem with md5 then the answer is to switch to sha256 or something else
<richardlitt>
Where do I ask for help if I can't install IPFS? Her?
<richardlitt>
*Here?
<lgierth>
yeah
<lgierth>
no output at all? not even an exit status?
<solarisfire>
What platform? What version of go? :D
<pjz>
solarisfire: you can see a list of commands by running 'ipfs --help' I think
<lgierth>
richardlitt: yes please
<lgierth>
can you edit? if not, that'd be jbenet again
<richardlitt>
Cool. jbenet, I need to be added there too, can't edit it at the moment.
<lgierth>
best to write a ticket or email with everything you need access to
<lgierth>
irc gets messy
<richardlitt>
lgierth: done.
voxelot has joined #ipfs
voxelot has joined #ipfs
voxelot has quit [Client Quit]
voxelot has joined #ipfs
voxelot has quit [Changing host]
voxelot has joined #ipfs
rschulman_ has quit [Quit: rschulman_]
<richardlitt>
Fun times!
marianoguerra has quit [Ping timeout: 252 seconds]
jamescarlyle has quit [Remote host closed the connection]
chriscool has quit [Ping timeout: 252 seconds]
jhulten has joined #ipfs
jhulten has quit [Ping timeout: 244 seconds]
chiui_ has joined #ipfs
flyingkiwi has joined #ipfs
chiui has quit [Ping timeout: 256 seconds]
mikee has joined #ipfs
jamescarlyle has joined #ipfs
* daviddias
is where! :)
<daviddias>
lol, I meant
* daviddias
is here! :)
<whyrusleeping>
daviddias: :D
<daviddias>
whyrusleeping: o/ :D
<FunkyELF>
is it possible to have different versions of a file and have a link that points to the latest?... like the head of a Git branch moves over time?
<daviddias>
richardlitt: yes, I definitely want, thank you for pinging me, got distracted by all WebRTC WG and Container Camp madness
<blame>
FunkyELF: ipns is intended as a solution to that problem
<blame>
but is still WIP
rongladney has joined #ipfs
<whyrusleeping>
chanserv...
<daviddias>
richardlitt: I have a simple 'roadmap' here https://github.com/ipfs/node-ipfs, there isn't exactly PR to review in the state of PR, since I'm mostly the one maintaining the code, and that is where your help comes in handy
<daviddias>
start from the swarm tab, because that is the foundation that enables the others to work
<FunkyELF>
Blame: yeah... I guess how would you prevent someone else from creating a new version of your file/directory, etc
<blame>
ipns uses public keys to address a pointer in ipfs
<blame>
so you have to be able to sign the ipns record with your private key to update it
<blame>
we also use dns right now
<daviddias>
richardlitt: there is also a lot of WIP documentation about libp2p https://github.com/ipfs/specs/pull/19/files which can be very valuable for you to 'dial in' on everything that libp2p does
<blame>
try ipfs.io/ipns/ipfs.io
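Blame's description of IPNS (a key-addressed mutable pointer that only the key holder can update) can be modeled in miniature. Loud caveat: real IPNS uses asymmetric public-key signatures; the HMAC below is only a stand-in for a signature, and the record format is invented for the sketch.

```python
# Toy IPNS-style record: a mutable name (derived from a key) points at
# an immutable hash, and only the key holder can publish updates.
# HMAC here is a stand-in for a real public-key signature.
import hashlib
import hmac

records = {}  # name -> (target_hash, tag)

def publish(secret, target):
    name = hashlib.sha256(secret).hexdigest()[:16]  # name derived from key
    tag = hmac.new(secret, target.encode(), "sha256").hexdigest()
    records[name] = (target, tag)
    return name

def verify(secret, name):
    target, tag = records[name]
    expect = hmac.new(secret, target.encode(), "sha256").hexdigest()
    return hmac.compare_digest(tag, expect)

name = publish(b"my-private-key", "QmOldVersion")
publish(b"my-private-key", "QmNewVersion")  # same key, same name: pointer moves
```

This is exactly FunkyELF's "head of a Git branch" behavior: the name stays fixed while the target hash it resolves to changes over time.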
<daviddias>
richardlitt: I'm available to explain things in more detail, although with availability today and tomorrow (due to CC)
pfraze has quit [Remote host closed the connection]
fiatjaf has left #ipfs ["undefined"]
jamescarlyle has quit [Remote host closed the connection]
<FunkyELF>
why does everything seem to start with Qm ?
<whyrusleeping>
FunkyELF: thats the multihash tag
<whyrusleeping>
we tag all our hashes with their function and length
<FunkyELF>
so you can change the hash later if it is found to be vulnerable?
<FunkyELF>
*hash function
<whyrusleeping>
yep!
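whyrusleeping's multihash answer explains the ubiquitous "Qm": a sha2-256 multihash prefixes the 32-byte digest with the function code (0x12) and the digest length (0x20), and base58-encoding any such 34-byte value always begins with "Qm". A self-contained sketch (the base58 encoder here is a minimal hand-rolled one, sufficient because these values never have leading zero bytes):

```python
# Why every hash starts with "Qm": multihash = <fn code><length><digest>,
# and base58(0x12 0x20 <32 bytes>) always begins with the digits "Qm".
import hashlib

ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58(data):
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = ALPHABET[rem] + out
    return out

def multihash_sha256(data):
    digest = hashlib.sha256(data).digest()
    return base58(bytes([0x12, 0x20]) + digest)  # 0x12 = sha2-256, 0x20 = 32

mh = multihash_sha256(b"hello world")
```

Because the function and length are part of the address, the network can migrate to a new hash function later without ambiguity, which answers FunkyELF's follow-up question.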
<rendar>
hmm, why is the use of a crypto hash enforced, instead of a normal hash like MurmurHash? to avoid those attacks which generate millions of keys with the same exact hash value?
<blame>
rendar: yes. Bad things happen when hashes collide in a content addressed system
<FunkyELF>
what is the big buck bunny address? I wanna try playing it in VLC
<blame>
FunkyELF: no idea, try re-adding it to find the address
<FunkyELF>
Blame: :(
<ipfsbot>
[node-ipfs] diasdavid pushed 1 new commit to master: http://git.io/vZBC5
<ipfsbot>
node-ipfs/master 513f826 David Dias: Update DRS list
<FunkyELF>
I think I see it in some video... I guess I'll pause the video and type in that hash
<blame>
give me a sec
<blame>
I'll upload it for you
<rendar>
Blame: that's right
<blame>
FunkyELF: im torrenting it
<rendar>
Blame: do you know SipHash? it seems awesome, not a crypto hash, but neither as easy as others to collide
<multivac>
No, but if you hum a few bars I can fake it
<whyrusleeping>
multivac: youre on two strikes now
<flyingkiwi>
Hi there. What are my possibilities to contribute *something* to the network? Is running a node (like bitcoind) "enough" or can I/my servers do more to help the network? And is running a node even somehow helpful?
<wasabiiii>
There's no need to "support the network". Unless you want to mirror data or something.
<RX14>
the node won't do anything by itself
<wasabiiii>
But yeah, the node does nothing on its own.
<whyrusleeping>
the node by itself helps out the dht
<whyrusleeping>
keeping long lived nodes around aids in faster resolution times
<RX14>
you need to get data specifically before it seeds it
<RX14>
but yeah, dht
<whyrusleeping>
the other thing it does, is help us see things we need to work on as the network scales
FunkyELF has quit [Ping timeout: 244 seconds]
<blame>
whyrusleeping: Would you be willing to add a "community donated gateways" feature to pinbot?
<whyrusleeping>
Blame: elaborate?
<RX14>
i would love to donate a gateway
<RX14>
i have a gigabit dedi doing basically nothing all day
<blame>
essentially, a lot of people just want to be charitable and host files for anybody who asks, within reason. Setting up a gateway is easy, and putting something in that gateway's cache is as easy as requesting it
pandro has quit [Remote host closed the connection]
<RX14>
when you run a gateway, does it evice when your HDD is full?
<RX14>
evict*
<blame>
so a service or bot to literally just call a wget/curl on a list for community donated gateways for a given hash seems ok
<blame>
with the understanding that members could drop off or gc any time they feel like it
<RX14>
so you could do !pin and it would give you a link to a random gateway
<RX14>
i just want to be charitable damnit
<blame>
This is what gets me every time somebody posts about "needing incentives". Maybe I am just a socialist at heart, but "providing a public service" is sufficient incentive for many
legobanana has joined #ipfs
<RX14>
i want to be loved :P
<RX14>
that's all it is for some people
jhulten has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping created feat/tar-fmt (+1 new commit): http://git.io/vZBz5
<ipfsbot>
go-ipfs/feat/tar-fmt 0de8947 Jeromy: first pass at a tar importer...
<voxelot>
Blame: RX14: agreed, I have a VPS just running ipfs and cjdns if anyone ever needs something pinned
pinbot has quit [Remote host closed the connection]
<voxelot>
basically a guy got pissed at CA's haha, lgierth is also really knowledgeable on it
hellertime has quit [Quit: Leaving.]
<rdbr>
what a coincidence, I've set up a few cjdns nodes yesterday
<FunkyELF>
are there legal issues with running ipfs locally? For example if someone puts something illegal into ipfs would I start seeding it without ever having requested it myself?
<lgierth>
FunkyELF: no you only seed what you requested
<voxelot>
FunkyELF: you only get files if you request them
<FunkyELF>
so a bit different than ToR... shouldn't be any real risks of running it
<lgierth>
yes something completely different
<mmuller>
there's a caveat with bitswap, right? nodes opting into it will act as intermediaries for data they're not necessarily interested in.
hellertime has joined #ipfs
<FunkyELF>
is it possible to get the address of a local file without adding it?... for the purposes of seeing if its already there?
<lgierth>
FunkyELF: ipfs add -n
<rdbr>
have you considered that freebsd already has a tool called ipfs?
jhulten has joined #ipfs
border has quit [Remote host closed the connection]
legobanana has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
mquandalle has joined #ipfs
xenkey has joined #ipfs
legobanana has joined #ipfs
<xenkey>
Hi
legobanana has quit [Client Quit]
<voxelot>
xenkey: O/
<RX14>
so is this cjdns, everyone shares the load of the network?
<voxelot>
in theory, but i think in practice nodes have gotten a bit clustered
<RX14>
so, people only route through you if they connect to you?
<rdbr>
nope, all traffic could get routed through you
<voxelot>
well you set your entry point and then the router takes over
<rdbr>
but usually that doesn't happen because it optimizes its routes so paths are short
<rdbr>
also keep in mind that it is a closed network, so not like tor but more like i2p. also keep in mind that your peers know who you are and there's no real content on that network
<rdbr>
ansuz, hi!
<RX14>
so nobody uses it anyways?
<voxelot>
hyperboria is fairly active
<rdbr>
well.. there are about a hundred, maybe two hundred users?
<voxelot>
cjdns encourages you to set up your own local mesh
<rdbr>
but I haven't found anything interesting besides a few irc servers
<blame>
Cjdns is honestly better than a vpn for many tasks
<rdbr>
for traffic between your own nodes, yes
<voxelot>
like in the case of Cisco blocking TOR at the ISP in china, cjdns could route around that right
<rdbr>
cisco finally managed to block tor?
<voxelot>
yeah but there's always a way around
legobanana has joined #ipfs
<voxelot>
they can only identify packets as tor packets due to how obvious they look
Eudaimonstro has joined #ipfs
<rdbr>
are cjdns packets identifiable?
<rdbr>
that is, identifiable as such
<voxelot>
not sure, lgierth? think the design routes you around the isp and opens direct connections so it doesnt matter
<blame>
They look like ssl/tls
<rdbr>
tls over udp?
<rdbr>
tls has some fairly recognizable handshakes iirc
<whyrusleeping>
sooooo many issues reported since kyles post
<whyrusleeping>
this is great
patcon has joined #ipfs
<VegBerg>
It does start again after removing .ipfs and doing the init again
<krl>
richardlitt: readme should be more helpful now
<richardlitt>
krl: thanks
hellertime has quit [Quit: Leaving.]
hellertime has joined #ipfs
legobanana has joined #ipfs
<edrex>
quick question: when I `ipfs add` something, I'm making a copy of the bytes into my local store. Which is suboptimal for storage if I need to keep the original files around for some reason. Has there been any work on serving file bytes in situ? Obviously modifications to the files would make them invalid sources for the bytes, which I guess would have to forcibly unpin.
<edrex>
i guess another approach would be to fuse mount the added directory, but that has different tradeoffs.
<edrex>
Probably it's too messy/unreliable to use the original files for storage. Just curious if there had been work in that direction.
ianopolous2 has joined #ipfs
ianopolous has quit [Read error: Connection reset by peer]
marianoguerra has joined #ipfs
marianoguerra has joined #ipfs
rschulman_ has quit [Quit: rschulman_]
legobanana has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
clever has quit [Ping timeout: 246 seconds]
pfraze has joined #ipfs
clever has joined #ipfs
rschulman_ has joined #ipfs
nastyHelpDeskBit has quit [Ping timeout: 246 seconds]
patcon has quit [Ping timeout: 246 seconds]
chriscool has joined #ipfs
nastyHelpDeskBit has joined #ipfs
chiui_ has quit [Ping timeout: 250 seconds]
simonv3 has joined #ipfs
Eudaimonstro has quit [Remote host closed the connection]
rongladney has quit [Quit: Connection closed for inactivity]
<RX14>
i'm rewriting the DOckerfile to build from source
<whyrusleeping>
RX14: we used to do that, issue was that it made the resulting docker image pretty big
<RX14>
you can remove parts...
<RX14>
i'll try and make it not as big
<patrickod>
isn't there a technique of using two separate Dockerfiles for building go projects?
<patrickod>
one to build and the second takes the built project and runs it in a smaller base container?
<RX14>
as long as you don't commit stuff, it's fine
jjanzic has joined #ipfs
<patrickod>
"commit" ?
<RX14>
as long as you uninstall your build tools before the one step ends
<patrickod>
oh
<RX14>
it's the same size
orzo has quit [Ping timeout: 246 seconds]
<RX14>
i'll try, see what the size differences are between the two containers
hyperbot has quit [Remote host closed the connection]
hyperbot has joined #ipfs
orzo has joined #ipfs
<hjoest>
RX14: I'm working on my Ruby client and trying to understand the response format of the get command, e.g. POST /api/v0/get?arg=QmedYJNEKn656faSHaMv5UFVkgfSzwYf9u4zsYoXqgvnch
<RX14>
i honestly feel like the HTTP api is a really thin wrapper around a commandline app
<RX14>
the HTTP api doesn't feel like one TBH
taneli has quit [Ping timeout: 240 seconds]
<hjoest>
RX14: Looks like get returns blocks with zero byte padding. Any hint where I can find information about it?
<whyrusleeping>
RX14: the cli is a wrapper around the http api
<RX14>
whyrusleeping, but the HTTP api feels like it was converted to an HTTP API from a commandline app in a hurry
<RX14>
which i suspect it was
<whyrusleeping>
well, it wasnt
captainbland has joined #ipfs
<whyrusleeping>
we designed the http api over the course of a month or so, and built the cli on top
<RX14>
why does it use ?arg=xxx
<RX14>
it just feels wrong
<whyrusleeping>
what would you use instead?
<RX14>
a JSON body
<RX14>
like most APIs
<whyrusleeping>
hrm, lots of apis do things differently. Ive seen lots that use json bodies, and lots that use the query strings
<whyrusleeping>
it made the most sense to us to put things in a query string
<whyrusleeping>
easier to use it via curl and friends
<RX14>
but the query strings aren't even names, it's all ?arg=
<whyrusleeping>
having to post a json blob is clunky
<RX14>
which feels like you are just trying to emulate the commandline
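The `?arg=` convention under debate maps positional CLI arguments to repeated `arg` query parameters, as in the `/api/v0/get?arg=<hash>` request hjoest mentioned earlier. A sketch of building such a request URL (the default daemon API port 5001 and the `offset` option name are assumptions for illustration):

```python
# Build an IPFS-HTTP-API-style URL: positional args become repeated
# "arg" parameters, named options become ordinary query parameters.
from urllib.parse import urlencode

def api_url(command, *args, **opts):
    query = [("arg", a) for a in args] + sorted(opts.items())
    return "http://127.0.0.1:5001/api/v0/%s?%s" % (command, urlencode(query))

url = api_url("get", "QmedYJNEKn656faSHaMv5UFVkgfSzwYf9u4zsYoXqgvnch")
```

whyrusleeping's point about curl-friendliness holds: the whole request is one pasteable URL, with no JSON body to construct.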
domanic has joined #ipfs
<hjoest>
whyrusleeping: Thanks. Yes, it returns a TAR stream. Actually, I saw this "ustar" in the stream but didn't recognize it as the magic number.
<hjoest>
whyrusleeping: Might be helpful to set the response header Content-Type: application/x-tar
<whyrusleeping>
hjoest: that would be a good idea. wanna file an issue for us?
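hjoest's two observations, the "ustar" string and the zero-byte padding, both follow from the response being a tar stream: every tar header block carries the magic `ustar` at offset 257, and members are padded to 512-byte blocks. A self-contained sketch using the stdlib `tarfile` module (the in-memory tar here merely stands in for the `/api/v0/get` response body):

```python
# Sniffing and reading a tar stream like the one /api/v0/get returns.
import io
import tarfile

# build a tiny tar stream in memory, standing in for the API response
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w", format=tarfile.USTAR_FORMAT) as tf:
    data = b"hello"
    info = tarfile.TarInfo(name="QmExampleHash")
    info.size = len(data)
    tf.addfile(info, io.BytesIO(data))
raw = buf.getvalue()

# the magic hjoest spotted: "ustar" at offset 257 of the header block
is_tar = raw[257:262] == b"ustar"

# and the zero-byte "padding" he saw: tar pads each member to 512 bytes
with tarfile.open(fileobj=io.BytesIO(raw)) as tf:
    content = tf.extractfile(tf.getmembers()[0]).read()
```

So even without a `Content-Type: application/x-tar` header, a client can detect the format from the stream itself.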
NHAS has joined #ipfs
<blame>
whyrusleeping: when I send a request to a gateway, and it starts to assemble the resulting file, and then I abruptly terminate the connection, does it get and cache the entire file I asked for?
<whyrusleeping>
Blame: yeah, i believe that for now, it will continue out the request
hjoest has quit [Read error: Connection reset by peer]
rsynnest has joined #ipfs
<whyrusleeping>
although that is undefined behaviour
<blame>
that is what I am doing with pinbox, because pinbox does not actually want the file, it just wants the gateway to cache it
<blame>
nobody has made a 1tb ipfs file just to abuse gateways with, have they?
<whyrusleeping>
Blame: not that we've seen
<whyrusleeping>
said person would also have to provide 1tb of upload bandwidth too
jamescarlyle has joined #ipfs
<rsynnest>
Hi all, I was on IRC yesterday asking about how ipfs might be able to handle dynamic sites, and didn't get much of an answer. I opened a question on the FAQ and would love to know what you guys think. I don't know too much about cryptography/security/networking so I'm sure there are some flaws in my design, and would love to know what those flaws are. If you
hyperbot has quit [Remote host closed the connection]
hyperbot has joined #ipfs
dstokes has joined #ipfs
devbug has joined #ipfs
_p4bl0 has joined #ipfs
NHAS has quit [Read error: Connection reset by peer]
hyperbot has quit [Remote host closed the connection]
hyperbot has joined #ipfs
<whyrusleeping>
rsynnest: hey, i'll take a look at it!
<xelra>
rsynnest: Isn't that what decerver does with the thelonius blockchain?
<rsynnest>
xelra: I have no idea
<whyrusleeping>
rsynnest: yeah, i would take a look at what eris industries is doing, i havent looked into their stack too deeply, but i know they are building decentralized webservices utilizing ipfs
<rsynnest>
ok yeah someone mentioned ethereum yesterday
<rsynnest>
will look into it
<rsynnest>
thanks guys
<whyrusleeping>
yeah, ethereum is pretty cool too.
<xelra>
The user data is saved in the blockchain. Which acts like a datastore.
<whyrusleeping>
note: ethereum is separate from eris
<voxelot>
also note: you can't store private data in a blockchain unless you can get zero knowledge proof of knowledge to work