mrcstvn has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
kerozene has quit [Ping timeout: 268 seconds]
kerozene has joined #ipfs
nessence has joined #ipfs
sonatagreen has quit [Ping timeout: 240 seconds]
cryptotec has joined #ipfs
cryptotec has quit [Ping timeout: 260 seconds]
<kpcyrd>
20:17 < sheer_pin> my friends are: kpyrcd lgierth Igel
<kpcyrd>
arg, accidentally pasted
grahamperrin has joined #ipfs
demize has quit [Ping timeout: 244 seconds]
jo_mo has quit [Quit: jo_mo]
borgtu has quit [Ping timeout: 260 seconds]
borgtu has joined #ipfs
demize has joined #ipfs
IAmNotDorian has joined #ipfs
<IAmNotDorian>
whois aar-
IAmNotDorian has left #ipfs ["Leaving"]
Guest73396 has quit [Ping timeout: 268 seconds]
mungojelly has quit [Ping timeout: 250 seconds]
r04r is now known as zz_r04r
pfraze has joined #ipfs
pfraze has quit [Ping timeout: 255 seconds]
sharky has quit [Ping timeout: 244 seconds]
sharky has joined #ipfs
cryptotec has joined #ipfs
cryptotec has quit [Ping timeout: 260 seconds]
stoopkid_ has joined #ipfs
<stoopkid_>
hello, i'm trying to make a decentralized wikipedia using IPFS, here's the very basic outline for finding/providing content: http://pastebin.com/J3tNtmJg
<stoopkid_>
i was wondering if anybody might have some thoughts about standardizing things further so that content can be integrated between users more easily
amstocker has joined #ipfs
amade has joined #ipfs
mvr_ has joined #ipfs
kuroshi has quit [Quit: reboot]
kuroshi has joined #ipfs
cryptotec has joined #ipfs
cryptotec has quit [Ping timeout: 260 seconds]
captain_morgan has quit [Ping timeout: 240 seconds]
Guest73396 has joined #ipfs
stoopkid_ has quit [Ping timeout: 246 seconds]
ygrek has quit [Ping timeout: 268 seconds]
cvik has quit [Ping timeout: 250 seconds]
amstocker has quit [Ping timeout: 265 seconds]
amstocker has joined #ipfs
kerozene has joined #ipfs
amstocker has quit [Ping timeout: 244 seconds]
Wolf480pl has quit [Quit: ZNC disconnected]
Wolf480pl has joined #ipfs
Guest73396_a has joined #ipfs
Guest73396 has quit [Ping timeout: 256 seconds]
Guest73396_a has quit [Ping timeout: 255 seconds]
<alterego>
I'm getting an error from "ipfs mount": "Error: failed to find any peer in table", though swarm peers displays the other local system I have.
<alterego>
I've removed all the default bootstrap addresses from both devices. So they just communicate with each other. The mount works fine from machine A, but not on Machine B .. :/
<alterego>
Oh, now it works, weird ...
cryptotec has joined #ipfs
slothbag has joined #ipfs
mrcstvn has joined #ipfs
OutBackDingo has quit [Quit: Leaving]
dignifiedquire has joined #ipfs
OutBackDingo has joined #ipfs
cryptotec has quit [Remote host closed the connection]
dingo__ has joined #ipfs
OutBackDingo has quit [Ping timeout: 260 seconds]
mrcstvn has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
OutBackDingo has joined #ipfs
dingo__ has quit [Ping timeout: 264 seconds]
Encrypt has joined #ipfs
nessence has quit [Remote host closed the connection]
nessence has joined #ipfs
nessence has quit [Ping timeout: 240 seconds]
<victorbjelkholm>
jbenet: re ipfsbin "Though 0.3.9 means you no longer need the private API, can do everything through the public Gateway/API now." basically it's always good for independent services to maybe default to the public gateway but still have their own gateway/api as a fallback
<jbenet>
Yes, I just mean using the writable gateway instead of opening the full api up
<victorbjelkholm>
aah, now I understand. Still waking up over here
<victorbjelkholm>
That makes sense, cool
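jbenet's suggestion (use the writable gateway rather than exposing the full API) could look roughly like this in Go. It is a sketch under assumptions: that the daemon's gateway listens on localhost:8080 with Gateway.Writable enabled, and that a POST to /ipfs/ answers with the new object's path in the Location header — treat both as assumptions, not a documented contract:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"strings"
)

// hashFromLocation extracts the hash from a gateway Location
// header such as "/ipfs/QmFoo" (assumed response shape).
func hashFromLocation(loc string) string {
	return strings.TrimPrefix(loc, "/ipfs/")
}

// addViaGateway posts content to a writable gateway and returns
// the hash the gateway reports back in its Location header.
func addViaGateway(gateway string, content []byte) (string, error) {
	resp, err := http.Post(gateway+"/ipfs/", "application/octet-stream",
		bytes.NewReader(content))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	return hashFromLocation(resp.Header.Get("Location")), nil
}

func main() {
	// No daemon needed to demonstrate the header parsing itself.
	fmt.Println(hashFromLocation("/ipfs/QmExampleHash"))
}
```

The point of the shape: an independent service like ipfsbin only ever needs this one write path, so the full (privileged) API on port 5001 can stay closed.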
cryptotec has joined #ipfs
cryptotec has quit [Remote host closed the connection]
cryptotec has joined #ipfs
mungojelly has joined #ipfs
mungojelly has quit [Quit: ERC (IRC client for Emacs 24.5.1)]
Whispery has joined #ipfs
Zuardi has quit [Remote host closed the connection]
Zuardi has joined #ipfs
mungojelly has joined #ipfs
TheWhisper has quit [Ping timeout: 246 seconds]
mungojelly has quit [Remote host closed the connection]
mungojelly has joined #ipfs
mungojelly has quit [Quit: ERC (IRC client for Emacs 24.5.1)]
<M-davidar>
pierron (IRC): codehero said you were involved with nixos?
<pierron>
M-davidar: I am
NeoTeo has quit [Quit: ZZZzzz…]
<alterego>
M-davidar: I'm working on a QML client to the IPFS gateway at the moment. Just experimenting really, nothing serious right now.
<M-davidar>
Uh oh, now I've started two conversations simultaneously :p
OutBackDingo has quit [Quit: Leaving]
<M-davidar>
alterego (IRC): cool, I've been hacking on a qml client for matrix.org
<M-davidar>
Was going to try an ipfs one depending on how that goes
<M-davidar>
pierron (IRC): I think a few people are interested in seeing closer integration between ipfs and nix, seeing as they align quite well
<pierron>
M-davidar: I am too, I was wondering to what extent we can have the nix-hash mechanism added to ipfs?
<pierron>
M-davidar: otherwise, this would involve making a key -> value mapping
<M-davidar>
pierron (IRC): I suspect having a mapping from nix to ipfs hashes would probably make the most sense
<pierron>
M-davidar: the nix-hash is different from other forms of hashes, as it requires you to strip the resulting hash from the binary, and it does not split the content into sub-parts.
<M-davidar>
Since they have somewhat different use cases
<pierron>
M-davidar: The question I have, then, is how do you publish such a mapping without having a central authority?
<M-davidar>
pierron (IRC): you also can't verify integrity of the content with nix hashes, right?
<pierron>
M-davidar: I noticed that you can query which users have a resource, which would be a good hint for looking up the keys
<pierron>
M-davidar: it depends how the nix store is configured, but we usually have 2 hashes per project: the hash of the recipe, and the hash of the content.
<pierron>
M-davidar: and the manifest provided by the buildfarm provides both.
<M-davidar>
Ah, I was only aware of recipe hashes
<M-davidar>
So, how are content hashes computed?
NeoTeo has joined #ipfs
<pierron>
the hash of the recipe is most often used, as it does not imply rewriting hashes.
<pierron>
the hash of the content is explained in Eelco's thesis; to make it short, you prefix your file with all the indexes of self-references, strip those, and then hash this large string.
<M-davidar>
Ah, OK, it might be possible to hook that into multihash for ipfs
<pierron>
If the nix store is configured to index by content (which is not the default), then each time a new directory is registered, we compute the content stripped from self-references (but with indexes), and re-inject the computed hash into the content before writing it again to disk
<M-davidar>
The special handling of self references isn't something that ipfs does currently though (the recommendation is to always use relative paths)
grahamperrin has quit [Quit: Leaving]
<M-davidar>
But I'm assuming that's not generally possible for the nix packages
<pierron>
indeed
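pierron's description of the content hash can be illustrated with a toy version — this is not Eelco Dolstra's actual algorithm, just its shape: replace each occurrence of the store path's own hash with a fixed placeholder, record the offsets, and hash the offsets plus the stripped content. Two copies of the same content that differ only in their embedded self-hash then hash identically:

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// stripSelfRefs replaces each occurrence of selfHash in content with a
// zero placeholder of equal length and records the offsets, so the
// result no longer depends on the concrete self-hash.
func stripSelfRefs(content, selfHash []byte) []byte {
	placeholder := bytes.Repeat([]byte{0}, len(selfHash))
	var offsets []int
	stripped := make([]byte, len(content))
	copy(stripped, content)
	for i := 0; i+len(selfHash) <= len(stripped); i++ {
		if bytes.Equal(stripped[i:i+len(selfHash)], selfHash) {
			copy(stripped[i:], placeholder)
			offsets = append(offsets, i)
		}
	}
	// Prefix the stripped content with the self-reference offsets,
	// as pierron describes, then hash the whole string.
	prefix := []byte(fmt.Sprint(offsets))
	return append(prefix, stripped...)
}

func contentHash(content, selfHash []byte) [32]byte {
	return sha256.Sum256(stripSelfRefs(content, selfHash))
}

func main() {
	a := contentHash([]byte("exec /store/AAAA/bin"), []byte("AAAA"))
	b := contentHash([]byte("exec /store/BBBB/bin"), []byte("BBBB"))
	fmt.Println(a == b) // true: same content modulo the self-hash
}
```

This also shows why such a hash is awkward for ipfs: the stripping pass needs the whole file at once, which is exactly the "does not split the content easily" problem pierron raises below.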
OutBackDingo has joined #ipfs
OutBackDingo has quit [Remote host closed the connection]
<M-davidar>
Hmm...
bedeho has quit [Ping timeout: 256 seconds]
<pierron>
M-davidar: otherwise I would be fine with the key (nix-hash) -> content (nar file in ipfs) mapping
OutBackDingo has joined #ipfs
<pierron>
M-davidar: because the way the nix-hash is computed does not allow us to split the content easily, while ipfs hashes allow a tree of hashes that splits the content into multiple chunks
<pierron>
M-davidar: and having multiple chunks would be awesome in terms of download speed from peers.
<pierron>
M-davidar: I am just not really sure how I should implement such a mapping.
<M-davidar>
pierron (IRC): yeah. We'd probably also want ipfs to have special support for chunking nar files (as we do with tar currently)
<M-davidar>
So you said this mapping is usually provided by the build farm manifest, right?
<pierron>
M-davidar: right
<M-davidar>
How is that usually distributed?
<pierron>
M-davidar: which provides hashes that are also signed.
<pierron>
M-davidar: they are distributed through https so far.
<pierron>
M-davidar: in which case, maybe we could mirror it with ipns :/
<M-davidar>
Yeah, that would probably be the simplest option
<M-davidar>
pierron (IRC): when ipld arrives it should be easier to push key-value maps onto ipfs
<M-davidar>
Especially if you're already using something like json (?)
<M-davidar>
pierron (IRC): obviously that still requires a central authority, but isn't that somewhat inherent in the build farm model?
<pierron>
It is.
slothbag has quit [Quit: Leaving.]
<M-davidar>
Alright, so I think we need three things for a start:
<M-davidar>
Ability for ipfs to handle nar files like tar
<M-davidar>
Build farm to provide ipfs hashes for said nar files
<M-davidar>
Manifest to be published over ipns (possibly with some ipld magic eventually)
<M-davidar>
pierron (IRC): sound about right?
<M-davidar>
The first bit is strictly optional, but it improves efficiency
<pierron>
M-davidar: that sounds right, but ideally, I would prefer if the buildfarm did not have to do that.
<pierron>
What are you doing with tar files?
<pierron>
Why is tar different from other format?
zz_r04r is now known as r04r
<M-davidar>
pierron (IRC): we can (optionally) separate out the files and directories in a tar archive into native ipfs objects and links, for improved deduplication and transparent addressing
<pierron>
M-davidar: ok, makes sense.
<M-davidar>
So, what are your concerns re the build farm?
<pierron>
M-davidar: nar is a really simple text format into which the contents of the files are copied in a deterministic order.
<pierron>
M-davidar: my concern is giving the build farm too much responsibility
<pierron>
M-davidar: it is made mostly to build, not to serve.
<pierron>
M-davidar: we have a mirror to serve, the build farm uploads the content for serving the results.
<M-davidar>
pierron (IRC): but the build farm already computes content hashes, no?
<M-davidar>
cool, so it looks like the manifest will be able to integrate nicely with ipld
<M-davidar>
but, the problem is that that's just a "raw stream of bytes" hash, rather than the nicely decomposed merkle-dag hashing that ipfs uses, yes?
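The mismatch M-davidar describes can be made concrete: a flat hash digests the raw byte stream, while an ipfs-style root digests a tree of chunk hashes, so the two differ even for identical content. A minimal sketch with fixed-size chunking and plain sha256 (no real unixfs framing or multihash encoding):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// flatHash digests the raw byte stream, like the NAR-file hash.
func flatHash(data []byte) [32]byte {
	return sha256.Sum256(data)
}

// merkleRoot chunks the data, hashes each chunk, and digests the
// concatenated chunk hashes — the shape of a merkle-dag root.
func merkleRoot(data []byte, chunkSize int) [32]byte {
	var leaves []byte
	for i := 0; i < len(data); i += chunkSize {
		end := i + chunkSize
		if end > len(data) {
			end = len(data)
		}
		h := sha256.Sum256(data[i:end])
		leaves = append(leaves, h[:]...)
	}
	return sha256.Sum256(leaves)
}

func main() {
	data := []byte("the same bytes, two different roots")
	// Identical input, but the roots disagree — hence the need for
	// a nix-hash -> ipfs-hash mapping, or a shared chunking scheme.
	fmt.Println(flatHash(data) == merkleRoot(data, 8)) // false
}
```

This is why "just the root hash" in the manifest only works if the buildfarm and ipfs agree on the exact chunking and tree layout, which is the direction the conversation heads next.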
multivac has joined #ipfs
<pierron>
M-davidar: indeed, this is the hash of the NAR file
<pierron>
M-davidar: I guess the other Hash/Size is the one of the derivation, but then I am surprised because I would expect the signature to appear somewhere in this file … :/
<M-davidar>
yeah, the mismatch between "normal" hash and "ipfs" hash is annoying sometimes
<pierron>
M-davidar: oh, the Hash / Size is probably the downloaded content, in which case we are saved.
<M-davidar>
it's not the first time this has come up
Guest73396 has joined #ipfs
<M-davidar>
pierron: i think it would be worth looking into how NAR and ipfs can be further integrated, both from the ipfs side of being able to inspect nar files, and from the nixos side of being able to publish hashes that work nicely with ipfs
<M-davidar>
(maybe integrated isn't the right word, working towards a mutually compatible solution)
<pierron>
M-davidar: I agree, having a Merkle-dag of the unpacked content would be helpful to support hard-linking of files.
* M-davidar
brb
<pierron>
M-davidar: I don't think we would want to make ipfs a central piece of Nix, as having a merkle-dag would dramatically increase the size of the MANIFEST file
<pierron>
M-davidar: hum … maybe not.
<M-davidar>
yeah, i agree you probably wouldn't want to do that
<M-davidar>
i'm just wondering whether it would be possible to have a "NAR hash" that has the merkle-linking property that ipfs needs, but without having to embed the entire dag in the manifest (just the root hash)
<M-davidar>
so the buildfarm and ipfs could compute the same hash independently
<M-davidar>
without needing a special nix->ipfs mapping
cemerick has joined #ipfs
<pierron>
That would be awesome, but then we would also have to support new versions of nar files in ipfs; not that this is likely to happen frequently.
mungojelly has joined #ipfs
dignifiedquire has quit [Quit: dignifiedquire]
<M-davidar>
pierron: there was also some talk of an "IPFS archive" format at one point, so maybe we could work towards a common standard?
* alterego
could really do with some kind of blockchain based distributed user authentication system.
<pierron>
In which case, the nar-hash -> ipfs-hash would make sense in the mean time.
<M-davidar>
yeah, that would probably be the best short-term solution, and we can discuss better but more difficult ones later
<M-davidar>
pierron: if you wanted to write to submit an issue to ipfs/notes about this stuff, I can try to direct relevant people towards it to comment
<pierron>
M-davidar: I will discuss this topic live with a few other at the first NixCon, and I will come back to discuss in more details.
<M-davidar>
s/to write//
<multivac>
M-davidar meant to say: pierron: if you wanted to submit an issue to ipfs/notes about this stuff, I can try to direct relevant people towards it to comment
<M-davidar>
pierron: cool, sounds great :)
<M-davidar>
where/when is it?
<M-davidar>
ah, berlin
<M-davidar>
ipfs is well represented in germany :)
<pierron>
M-davidar: it would be in Berlin the 14-17th of November
<M-davidar>
ping lgierth cryptix
<pierron>
M-davidar: There are still issues that would need to be addressed beyond the key->value mapping of nar file contents, such as the anonymity of the receivers & senders of the blobs. Otherwise this would make it easy to detect vulnerable peers :/
<M-davidar>
pierron: as in being able to tell if people are running outdated software?
legobanana has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<pierron>
M-davidar: either by making queries to the publishers, or by accepting requests from other peers (you won't request a software that you already have)
<M-davidar>
pierron: that's already possible through other means though, isn't it?
<M-davidar>
for some things, at least (apache, etc)
Oatmeal has quit [Ping timeout: 240 seconds]
<pierron>
M-davidar: indeed, the buildfarm is technically aware of when you update your packages.
<pierron>
M-davidar: except that now, this would be anybody who is sending you a blob
<M-davidar>
pierron: yeah, I mean detecting software versions based on externally visible behaviour
<M-davidar>
like, apache printing its version on error pages
<pierron>
M-davidar: yes, but apache does not print bash/openssl version.
<M-davidar>
that's true
<M-davidar>
although it's kind of getting into security by obscurity territory :/
cemerick has quit [Ping timeout: 250 seconds]
<pierron>
anyway, even without this security concern in mind, I think this would already be an interesting thing to have.
Oatmeal has joined #ipfs
<M-davidar>
yeah, sorry, i got a little tangential ;)
<M-davidar>
anonymity is something that we've been thinking about
<M-davidar>
there's currently some work going into ipfs-over-tor, and others have expressed interest in ipfs-over-i2p
<whyrusleeping>
gnite
<M-davidar>
ipfs-over-hyperboria is also possible - it doesn't guarantee perfect anonymity, but does reduce your trust to your immediate peers (iirc)
jo_mo has joined #ipfs
<M-davidar>
there's always going to be a performance tradeoff though
<pierron>
M-davidar: I guess this could be part of the disclaimer, that people can enable such a system, but if they are concerned about some kinds of attacks, they should wrap connections under an anonymity service.
<pierron>
s/service/transport/
<multivac>
pierron meant to say: M-davidar: I guess this could be part of the disclaimer, that people can enable such a system, but if they are concerned about some kinds of attacks, they should wrap connections under an anonymity transport.
<M-davidar>
yeah, I think that would probably be reasonable
<M-davidar>
and always keep your software uptodate ;)
<M-davidar>
or, at least, download it from the network so it looks like you are
* pierron
wonder if I can make the bot play sokoban by pasting a sed script that is on his computer …
<M-davidar>
pierron: nah, it's only fake regex (the search string is verbatim)
<pierron>
M-davidar: downloading from the network would still be visible with a time-based attack on the first request
<pierron>
M-davidar: security is hard ;)
<M-davidar>
pierron: oh... I see what you mean now
<M-davidar>
yeah, I'm not a security person, that's other people's job :p
<pierron>
M-davidar: I am making lots of security issues in my work-day job, that's how I learn ;)
<pierron>
M-davidar: hopefully most of them never reach the release channel, and most of them are hard to exploit.
<pierron>
M-davidar: otherwise, you got random code execution :P (working on a JIT-compiler)
<M-davidar>
cool, which language?
<pierron>
JavaScript
<M-davidar>
that doesn't narrow it down much :p
<pierron>
SpiderMonkey ;)
<M-davidar>
ah, mozilla?
<pierron>
yes.
<M-davidar>
cool
<M-davidar>
didn't realise there were any mozilla people in here
<pierron>
Oh? Then I wonder which browser vendor you are negotiating with …?
* M-davidar
was told it was a secret :p
<pierron>
The fact that you are negotiating is in every public video.
<pierron>
but I don't see Google being interested in offloading loads out-side their servers.
<pierron>
And Apple, I don't know.
<pierron>
Microsoft, I don't know either.
<M-davidar>
yeah, I figured who it probably was, but apparently I'm not in the inner circle :p
<pierron>
What I know is, if this is not Mozilla, then we would be pissed, because a lot of devs seem to be interested in the decentralized web.
<M-davidar>
you'd have to ask jbenet :p
<M-davidar>
that's really cool though. Go Mozilla! :D
<pierron>
jbenet: If you are not talking with Mozilla, please pm me such that we can fix that ;)
<M-davidar>
pierron: what other decentralised web stuff is mozilla working on?
<pierron>
M-davidar: I don't know, but I know there is a lot of interest
<M-davidar>
anyway, getting late, was good talking to you pierron :)
<pierron>
M-davidar: same here. :)
<pierron>
M-davidar: I will open an issue later this month.
<M-davidar>
cool, sounds good :)
Encrypt has joined #ipfs
OutBackDingo has quit [Quit: Leaving]
OutBackDingo has joined #ipfs
cemerick has joined #ipfs
dignifiedquire has joined #ipfs
<dignifiedquire>
daviddias: jbenet having probs with the webui on the osx 0.3.9 build
<dignifiedquire>
it's not even opening anymore (fresh download from gobuilder)
<dignifiedquire>
the request just hangs
<daviddias>
that is weird
<daviddias>
it works for me
<daviddias>
shows peers and everything
<dignifiedquire>
after some time I get the strange context timeout warning
<dignifiedquire>
it seems that it's not finding the requested hash
<daviddias>
I know I'm starting to sound like an old man, but you know, what we really need is to test the webui really well with automated tests, so that before any release, everything gets checked
<dignifiedquire>
yep
<daviddias>
interesting
<dignifiedquire>
the strange thing though is that running the webui in dev mode from the repo works fine
<dignifiedquire>
so it seems that the daemon is not properly serving it
<dignifiedquire>
I don't think so to be honest, because as I said if I'm serving the development version it connects fine
<dignifiedquire>
this is the hash that fails to be fetched /ipfs/QmR9MzChjp1MdFWik7NjEjqKQMzVmBkdK3dz14A6B5Cupm
<dignifiedquire>
running ipfs cat on the hash just hangs as well
<dignifiedquire>
one question, is the webui packaged with ipfs, or does it fetch it from another node on the first startup?
<dignifiedquire>
cause I'm on airport wifi, so maybe it just fails to connect to other nodes
<dignifiedquire>
and it seems I do not have any peers connected atm
OutBackDingo has quit [Quit: Leaving]
dignifiedquire has quit [Ping timeout: 246 seconds]
<victorbjelkholm>
hm, when I use 'ipfs dht findprovs $HASH' I get one ID back, myself, which is fine. But when using the API like this: 'curl http://127.0.0.1:5001/api/v0/dht/findprovs\?arg\=$HASH' I get a lot of IDs back. Anything obvious I'm missing?
OutBackDingo has joined #ipfs
<victorbjelkholm>
I need to do some filtering myself?
<victorbjelkholm>
hm, yeah, seems like it shows all the queries that are being made, but only the ones with obj.Type being "4" are the ones that actually have it
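What victorbjelkholm describes matches the API's streaming output: one JSON object per DHT query event. A hedged Go sketch of the filtering he ended up doing — the field names mirror the objects the API prints, and the Type == 4 check is his empirical observation rather than a documented contract:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// event mirrors the objects streamed by /api/v0/dht/findprovs
// (field names assumed from the observed output).
type event struct {
	ID        string
	Type      int
	Responses []struct{ ID string }
}

// providers keeps only the events victorbjelkholm identified as
// actual provider records (Type == 4) and collects their peer IDs.
func providers(ndjson string) []string {
	var ids []string
	dec := json.NewDecoder(strings.NewReader(ndjson))
	for {
		var ev event
		if err := dec.Decode(&ev); err != nil {
			break // io.EOF ends the stream of concatenated objects
		}
		if ev.Type != 4 {
			continue // query traffic, not a provider record
		}
		for _, r := range ev.Responses {
			ids = append(ids, r.ID)
		}
	}
	return ids
}

func main() {
	sample := `{"Type":1,"ID":"QmPeerA"}
{"Type":4,"Responses":[{"ID":"QmProvider"}]}`
	fmt.Println(providers(sample)) // [QmProvider]
}
```

The CLI does this filtering for you, which is why `ipfs dht findprovs` returns one ID while the raw HTTP endpoint returns every query event.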
<achin>
uh huh, i just ran into #1688 -- any known workarounds?
anticore has quit [Quit: bye]
<lgierth>
cryptix: :)
elima_ has joined #ipfs
anticore has joined #ipfs
<cryptix>
achin: uuh.. i tried to forget that one
anticore has quit [Client Quit]
anticore has joined #ipfs
anticore has quit [Client Quit]
anticore has joined #ipfs
cryptotec has quit [Remote host closed the connection]
anticore has quit [Client Quit]
anticore has joined #ipfs
anticore has quit [Client Quit]
anticore has joined #ipfs
<cryptix>
that error is from go's multipart package.. i guess nobody tried go-fuzz on it yet?
elima_ has quit [Ping timeout: 246 seconds]
<NightRa>
Just starting out with IPFS here. For some reason 'gateway.ipfs.io' can't find my hashes (it was able to earlier today).
<NightRa>
What could be the reason?
<NightRa>
I have ~73 peers in the swarm now. Is that too low and I'm not being discovered by the DHT?
<cryptix>
NightRa: the public gateways don't persist the content you request from them
<NightRa>
My IPFS daemon is open right now
<achin>
cryptix: it appears 100% reproducible for certain inputs. in my case, my file was an html file, so i just added a new line to it, and the problem went away
<cryptix>
achin: i was also able to reproduce from sonatagreen's example. i'm not too sure how to act about it. i need to see where we use that package
martinkl_ has joined #ipfs
<cryptix>
NightRa: if you have a 2nd peer you can see if you are reachable by doing 'ipfs dht findpeer $peerid'
<cryptix>
NightRa: the peerid is the hash of the key of a node, as seen in 'ipfs id'
<cryptix>
NightRa: 'ipfs ping $peerid' is also a good test
elima_ has joined #ipfs
<NightRa>
Can I somehow know the peerid of the ipfs gateway?
<cryptix>
well... its not to hard to find them.. :) what for, though?
<NightRa>
To try to ping it
atrapado has joined #ipfs
martinkl_ has quit [Client Quit]
<cryptix>
oh sure
<cryptix>
NightRa: QmSoLueR4xBeUbY9WZ9xGUUxunbKWcrNFTDAadQJmocnWm for instance
<cryptix>
you can find more in $HOME/.ipfs/config
<cryptix>
the default bootstrap nodes are also the public gateways
<pinbot>
now pinning /ipfs/Qmduc5TqSk5hWnuvBCDHQMXnPALPZ131Jz9wbU9SYTbjyB
<NightRa>
cryptix: I port forwarded 5001, 4001, and 8080
<pinbot>
[host 5] failed to grab refs for /ipfs/Qmduc5TqSk5hWnuvBCDHQMXnPALPZ131Jz9wbU9SYTbjyB: context canceled
<pinbot>
[host 2] failed to grab refs for /ipfs/Qmduc5TqSk5hWnuvBCDHQMXnPALPZ131Jz9wbU9SYTbjyB: context canceled
<cryptix>
NightRa: you dont want to forward 5001 (thats the json api, we have CORS protection but still..) you can open 8080 but only 4001 is really needed
<cryptix>
NightRa: can you give me your peerid?
<NightRa>
My id is: QmQ9XwWLvSCCHriMBPgY9hJUDVuJaYyThRHUTrLHC2ch9y
* achin
is connected to QmQ9XwWLvSCCHriMBPgY9hJUDVuJaYyThRHUTrLHC2ch9y
<NightRa>
FYI: I'm on a build from tomorrow, after 3.9
<NightRa>
Maybe some sort of new bug?
<cryptix>
iirc no changes on that level recently
<NightRa>
from yesterday**
<NightRa>
Well, I'll restart now to see if it's a port rebinding problem
<cryptix>
ok
voxelot has joined #ipfs
amiller has quit [Ping timeout: 250 seconds]
<daviddias>
if someone is into the dark arts, can someone dare to explain to me why the heck these tests pass in the browser https://github.com/jbenet/js-multihash/pull/4 (without me having to do anything special to the Buffer type)
amiller has joined #ipfs
amiller is now known as Guest89870
cryptotec has joined #ipfs
grahamperrin has left #ipfs ["Leaving"]
cryptote_ has joined #ipfs
wtbrk has joined #ipfs
<cryptix>
daviddias: browserified?
cryptotec has quit [Ping timeout: 250 seconds]
Guest89870 has quit [Ping timeout: 252 seconds]
mvollrath has quit [Ping timeout: 246 seconds]
voxelot has quit [Ping timeout: 250 seconds]
<NightRa>
Well, I'm up again with a bunch of peers
<ion>
Perhaps go-ipfs should keep track of how many nodes have dialed you through each address you’re advertising. That would be a nice indicator of ingress connectivity.
<cryptix>
NightRa: this time i have you on /ip4/213.57.18.85/tcp/20665/ipfs/QmQ9XwWLvSCCHriMBPgY9hJUDVuJaYyThRHUTrLHC2ch9y
<NightRa>
Oh, weird behavior!
<cryptix>
ion: indeed
<NightRa>
Just like last time: First run of ipfs daemon, the local webui isn't populated. On the second run, it is populated, and I'm resolved by the ipfs gateway. - It was exactly the same last time
<ion>
What does Google say your IP address is this time?
<daviddias>
cryptix: 'maybe', the docs don't mention it gets browserified even once
<daviddias>
and also I don't know whether browserify makes the magic switch of Buffer to ArrayBuffer
<NightRa>
And 149.78.48.62
<NightRa>
First run - fails everywhere. Second run - works perfectly. Afterwards: ui populated, but can't be resolved.
<ion>
So your ISP is doing weird NAT stuff that depends on the port you’re connecting to or something?
<NightRa>
IDK, maybe..
voxelot has joined #ipfs
<cryptix>
not unheard of
<kpcyrd>
welcome to the internat
<cryptix>
:)
<cryptix>
kpcyrd: maybe i can make it tomorrow - still recovering from an instaflu
<kpcyrd>
cryptix: :)
<kpcyrd>
cryptix: you know the address?
<cryptix>
kpcyrd: yup
voxelot has quit [Ping timeout: 244 seconds]
voxelot has joined #ipfs
<xelra>
Hey guys! Time for my quarterly check-in. :)
<xelra>
I wanted to ask whether it's already possible to make a private sync-net, cut-off from the public. Last time I asked, it was a planned feature.
<ion>
Still planned
<xelra>
I see, thanks.
<cryptix>
xelra: yup - latest development is a PR that adds a sharedkey
<cryptix>
so only nodes with that key can form a swarm
<xelra>
That's great news.
<ion>
Where is that PR?
<cryptix>
numbers... sigh..
<Stskeeps>
cryptix: form a swarm around a particular ipfs hash?
<Stskeeps>
oh that's neat
<xelra>
That's always been my usecase for ipfs: use it as a replacement for a distributed filesystem, where I can spread my private files across multiple servers and access them directly, with striping and stuff.
<cryptix>
Stskeeps: not sure we talk of the same thing ^^
<Stskeeps>
cryptix: well, maybe not; but what I imagined when you said it was that you'd need a key in order to participate in a swarm around a certain ipfs hash
<cryptix>
i was confident it was a PR.. oh well
<Stskeeps>
and until then you couldn't even see the participants in the swarm
<Stskeeps>
seems like we're not talking about the same thing :)
<cryptix>
Stskeeps: xelra (and others) want to create independent networks (swarms) of ipfs nodes to share private content
<Stskeeps>
yeah
<xelra>
This -> "(harder impl): use a PKI and only connect to trusted nodes who either" <- what jbenet said there.
<ion>
cryptix: Thanks. Huh, i searched for “shared key” but I must have managed to miss that one in the list.
<cryptix>
you can try to do so today by setting up bootstrap nodes yourself, but you can still be dialed if you are found
<ion>
cryptix: Oh, my search only returned pull requests, not issues.
<cryptix>
ion: yea..
<cryptix>
i'm sure there was discussion which led to whyrusleeping opening that one-liner issue
<cryptix>
there were others requesting this lately
<ion>
cryptix: That’s just an issue, though. You said PR, right?
<cryptix>
yea.. now i'm not sure the idea made it into a PR
<xelra>
I want to make a little pinbot that either in a simple form simulates RAID6 and mirrors and stripes every file I put into that private network or in a more complicated form pins files across the private network with a more modern dfs algorithm like paxos.
<xelra>
And replace Andrew FS with that.
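For the "mirror every file across the private network" half of xelra's pinbot idea, one coordination-free approach is rendezvous (highest-random-weight) hashing: score each node against the file's hash and pin on the top N, so every node independently computes the same replica set. A sketch — node names, replica count, and the scoring function are all illustrative, not anything ipfs provides:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"sort"
)

// score ranks a node for a given file hash; the highest scores win.
func score(node, fileHash string) uint64 {
	h := sha256.Sum256([]byte(node + "/" + fileHash))
	return binary.BigEndian.Uint64(h[:8])
}

// replicasFor picks the n nodes that should pin the file,
// deterministically and without a coordinator: every node that
// runs this computes the same answer.
func replicasFor(fileHash string, nodes []string, n int) []string {
	ranked := append([]string(nil), nodes...)
	sort.Slice(ranked, func(i, j int) bool {
		return score(ranked[i], fileHash) > score(ranked[j], fileHash)
	})
	if n > len(ranked) {
		n = len(ranked)
	}
	return ranked[:n]
}

func main() {
	nodes := []string{"node-a", "node-b", "node-c", "node-d"}
	// Two replicas per file, chosen the same way by every node.
	fmt.Println(replicasFor("QmExampleFile", nodes, 2))
}
```

Adding or removing a node only moves the files that scored that node highest, which is the property that makes this friendlier than static assignment for a small private cluster.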
ygrek has joined #ipfs
<Stskeeps>
cryptix: that said; what if you could actually restrict access to the info of what nodes provide a ipfs object, so instead of getting the node list, you'd get the encrypted node list, and then you'd use your shared key to decrypt the node list on a local node?
domanic has joined #ipfs
<xelra>
It's always been a dream of mine to have one filesystem across multiple devices over wan. I'd looked into Hadoop FS for that, but it performs badly over WAN. And the discofs stuff is totally closed.
<xelra>
So I thought that ipfs in a closed network would provide just that. :)
<ion>
achin: Nice, but the hashless chunks don’t seem to be clickable.
<achin>
yeah, you can only click on hashes at the moment
<ion>
achin: I clicked on the oldest chunk and a “Chunk offset:0“ appeared above it.
Whispery is now known as TheWhisper
<achin>
yeah, it's a dumb bug, to be fixed
<achin>
there's actually a lot of dumb bugs, i thought i'd dump it here before going AFK for lunch
Matoro has quit [Ping timeout: 255 seconds]
voxelot has quit [Ping timeout: 256 seconds]
nessence has joined #ipfs
domanic has quit [Ping timeout: 250 seconds]
mrcstvn has joined #ipfs
<whyrusleeping>
g'morning
bedeho has quit [Ping timeout: 255 seconds]
<cryptix>
hey whyrusleeping!
<whyrusleeping>
cryptix: lol, i thought i pushed fixes for that PR before leaving last night
<cryptix>
yup but bad ones ;)
<cryptix>
(you added one param, not two)
<whyrusleeping>
lol,thats what i meant!
<whyrusleeping>
i didnt push that fix
mildred has joined #ipfs
<cryptix>
heh okay - can you give me a digest of how ipns resolve works? in particular why the node needs to be online
<cryptix>
in theory i would just need the published record and a copy of the publishers public key, no?
Matoro has joined #ipfs
amiller has joined #ipfs
<whyrusleeping>
the node doesnt need to be online
<whyrusleeping>
but the node who creates the key is the one responsible for continuing to publish it every so often
<cryptix>
okay - lets see how 1887 improves things
Encrypt has quit [Quit: Dinner time!]
mvollrath has joined #ipfs
bedeho has joined #ipfs
wtbrk has quit [Quit: Leaving]
<whyrusleeping>
yeah, it should help a lot!
tilgovi has joined #ipfs
tilgovi has quit [Remote host closed the connection]
anticore has quit [Remote host closed the connection]
<ion>
whyrusleeping: If you feel like pushing your feat/ipns-cache, I can rebase my changes on top.
Matoro has quit [Ping timeout: 260 seconds]
<whyrusleeping>
ion: yeap! just gotta finish making breakfast
anticore has joined #ipfs
patcon has joined #ipfs
mrcstvn has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed feat/ipns-cache from 26d7561 to 4342b8e: http://git.io/vWWp5
<ipfsbot>
go-ipfs/feat/ipns-cache 4342b8e Jeromy: cache resolve cache to lru to avoid leaking memory...
<whyrusleeping>
ion: there we go
mildred has quit [Ping timeout: 250 seconds]
<whyrusleeping>
ion: reading your changes, we dont want to remove cachelife
<whyrusleeping>
the publisher does not get to decide how long someone else holds their record for
<whyrusleeping>
they can set an upper limit
<whyrusleeping>
but the node storing the record gets to make that decision
<ion>
whyrusleeping: But isn't the only problem memory consumption? And that is bounded by the LRU cache. Expired entries stay in the cache anyway until being removed by LRU, or until being queried and cacheGet noticing. Unless you want to implement additional logic to prune the cache?
tmberg has joined #ipfs
* whyrusleeping
tries to remember if there was an issue besides memory
domanic has joined #ipfs
elima_ has quit [Ping timeout: 240 seconds]
rendar has quit [Ping timeout: 260 seconds]
ygrek has quit [Ping timeout: 240 seconds]
<whyrusleeping>
hrm...
<whyrusleeping>
you might be right
<ion>
If I, the owner of the querying node, keep querying for a record with a long TTL often enough to keep it in the LRU cache, isn't it just good for *me* to keep it cached?
akkad has quit [Excess Flood]
akkad has joined #ipfs
<cryptix>
sounds good to me :)
rendar has joined #ipfs
* whyrusleeping
is having a hard time with logic this morning, he will take ion's word for it
martinkl_ has joined #ipfs
<whyrusleeping>
cryptix: ion in the case that the record does not have a ttl set
<whyrusleeping>
i'm torn between not caching the record, or using some default ttl
domanic has quit [Ping timeout: 240 seconds]
<whyrusleeping>
er, no ttl OR eol
HoboPrimate has quit [Remote host closed the connection]
cemerick has joined #ipfs
elima has joined #ipfs
vanila has joined #ipfs
<ion>
whyrusleeping: My commit defaults to 1 minute if it's unset.
<whyrusleeping>
need to give you collab to make it easier for me to steal your commits...
<whyrusleeping>
its so hard to pull commits from other repos (where 'so hard' is defined as: 'more typing of urls than i want to do')
<cryptix>
haha
<cryptix>
(did some work on git-remote-ipfs today \o/)
<whyrusleeping>
ion: i hope youre not offended by my just working those changes into my current changeset.
<whyrusleeping>
cryptix: awesome!
<ion>
whyrusleeping: I was expecting that, the commit history for the PR would have been dirty otherwise.
<ion>
whyrusleeping: I just finished eating. Would you still like me to rebase my branch?
<whyrusleeping>
ion: still working on this
<whyrusleeping>
i'll let you know
<ion>
It was just two small conflicts, already rebased.
bedeho has quit [Ping timeout: 240 seconds]
<whyrusleeping>
lol, i'm going to push again though
<ion>
no problem
Matoro has joined #ipfs
<alterego>
Is there a way I can override the location of the ~/.ipfs directory?
<ion>
IPFS_PATH IIRC
<alterego>
Awesome
<alterego>
Thanks
domanic has joined #ipfs
<cryptix>
yup that's it
<alterego>
:)
<cryptix>
achin: that irclogs viewer is very interesting
<jbenet>
xelra cryptix ion: the private network is super, super easy to add. could do it in one day.
NeoTeo has quit [Quit: ZZZzzz…]
<alterego>
Where are the list of output formats you can use with -f ?
<revolve>
jbenet: are you considering adding private networks to ipfs?
<alterego>
I like the idea of a VPN on top of IPFS, where the path hash is actually the hash of an encrypted hash, which could only be resolved by nodes that have the private key to decrypt it.
<alterego>
revolve: isn't that basically what IPFS does?
<revolve>
you could run that today and have decentralised revisions of hypermedia resources
<revolve>
with the anchor tags intact.
<alterego>
Oh, right, it caches HTTP responses?
<revolve>
yeah
<revolve>
it's a caching proxy
<alterego>
Cool
<revolve>
you could also chuck stuff in from your local machine and have that be content addressable
<revolve>
or, the machine you reach the proxy from, considering
<revolve>
I think the best way to decentralise the web is possibly an http proxy on ipfs
<alterego>
I think one day we may have a browser plugin that for every pay you visit on say, wikipedia, it downloads and adds to ipfs.
<alterego>
Yes, exactly.
<alterego>
s/pay/page/
<multivac>
alterego meant to say: I think one day we may have a browser plugin that for every page you visit on say, wikipedia, it downloads and adds to ipfs.
<revolve>
ipfs is just the first thing doing this
<revolve>
dude that's too specific
<alterego>
Damnit I hate those bots :P
<alterego>
revolve: well, I know that example was specific, but you can't go around caching every page, some pages are dynamic.
<revolve>
multivac: a proxy transforms urls to make sure other assets are pulled through it
<alterego>
Like, really dynamic and not really content driven sites.
<revolve>
yeah
<revolve>
this thing is really for the static stuff like wikipedia, blogs, news articles
<alterego>
Hence my example ;)
<revolve>
but it also has some other neat properties
<revolve>
because it deals in multiple overlay networks where each node has a public/private keypair and is intended to let multiple people /edit the same page at the same time/
<revolve>
it could also do encrypted direct-to-peer chat within it
<revolve>
protocol still stays simple, it's just a variant of the edit rpc for shooting edit data back and forth over the internet
<jbenet>
daviddias: the interface of js-multihash did not change, right? bump patch? or bump minor?
<xelra>
jbenet: One day? I'll take your word for it. :)
<jbenet>
revolve: how that's done is actually a pretty big deal. what you would want to do is what PeerCDN did, having the source provide hashes (could maybe do it from an ETag) and use those as the source of truth. subresource integrity will help, too.
<jbenet>
xelra: ok.
<xelra>
I want to make a distributed filesystem, private, with ipfs and run a pinbot that simulates something like raid6 across my servers. It will have all my private files, which is why I really want it closed off.
<xelra>
Then stripe to my mobile devices and the laptop on the go.
<revolve>
cryptix: excellent
<revolve>
jbenet: how does that work in the event of the original source going offline?
<jbenet>
xelra: help us build some of this stuff. we want it too. see https://github.com/ipfs/notes/issues/58 <-- that's meant to operate as a RAID array over a set of nodes.
<jbenet>
revolve: it doesn't in those cases, but you can't just cache HTTP -- the whole thing is mostly dynamic now. it will be a nightmare without proper accounting of what's right. (i.e. for things like wikipedia and twitter you have to understand the underlying data)
<jbenet>
the wayback machine is the only sane way to do _that_ and they've also run into the problem that most of the dynamic web is user dependent now.
simonv3 has joined #ipfs
<revolve>
things like recording a twitter stream as a series of events to replay?
<ion>
whyrusleeping: Also i’m unfortunately @ion1 on GitHub, @ion was taken. :-)
<cryptix>
i wonder how long before twitter would kill api access of a hashchain-like mirror..
<kpcyrd>
are there any opinions on adding the cjdns addresses to the bootstrap config? I have ipfs on a server which is fc00::/8 only right now.
<cryptix>
kpcyrd: why cant you add it?
<cryptix>
oh
<cryptix>
until relay is in you wouldnt have much fun with a cjdns-only node
cemerick has quit [Remote host closed the connection]
mrcstvn has joined #ipfs
<revolve>
jbenet: what's the best way of trusting the initial offering of a resource, have site admins purposefully include a hash somewhere?
<kpcyrd>
cryptix: I know. It /could/ work out of the box for cjdns2cjdns and most of the ipfs servers have fc00::/8 anyway :)
<revolve>
there has to be a smart way of doing this entirely from the visitor side
<cryptix>
kpcyrd: only to the point when you want to get objects from non cjdns nodes
<kpcyrd>
cryptix: unless it's already available from a cjdns node :)
<kpcyrd>
cryptix: do you know if ipns is relayed?
<kpcyrd>
publish it from cjdns only, resolve from somewhere else
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
tmberg has left #ipfs [#ipfs]
ygrek has quit [Ping timeout: 265 seconds]
r04r is now known as zz_r04r
amade has quit [Quit: leaving]
Encrypt has joined #ipfs
bedeho has joined #ipfs
cryptote_ has quit [Remote host closed the connection]
cryptotec has joined #ipfs
cryptotec has quit [Ping timeout: 260 seconds]
<HoboPrimate>
cool, I didn't know that the centralized, decentralized and distributed network images jbenet shows in talks were part of the design of the internet : http://www.computerhistory.org/internethistory/1960s/
<alterego>
If I want to manually create an object, how do I specify links on the command line? The only option I see with ipfs object new is <template> ?
pfraze has joined #ipfs
reit has quit [Quit: Leaving]
<kpcyrd>
I think I've added too much to ipfs. every command takes multiple seconds without a daemon running
<sonatagreen>
'ipfs object new' with no further arguments will give you an empty object (no links, no data), and then you can build it up with 'ipfs object patch [stuff]'
<sonatagreen>
there may be a better way than this but that's what I've figured out so far
bauruine has joined #ipfs
<sonatagreen>
...probably what you want is 'ipfs object put' actually
<alterego>
sonatagreen: yeah, I figured that, I've actually found a solution. I can pipe JSON directly to ipfs object put, if the object is a directory then "Data" has to equal "\u0008\u0001"
<sonatagreen>
nice
<sonatagreen>
(if the object is a directory then you may want 'ipfs add -r' instead)
<alterego>
Sure, if I'm using ipfs add, but I want to make this directory virtually, it isn't actually me pushing data from my filesystem.
<sonatagreen>
aha
<alterego>
So the directory doesn't actually exist ;) I'm making one up.
pfraze has quit [Remote host closed the connection]
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<alterego>
ls
Matoro has quit [Ping timeout: 265 seconds]
amstocker has joined #ipfs
bedeho has quit [Ping timeout: 240 seconds]
jfntn has joined #ipfs
<whyrusleeping>
ipfs object new unixfs-dir <-- gives you an empty directory
<whyrusleeping>
alterego: on current master, youll want to use 'ipfs object patch'
<whyrusleeping>
in 0.4.0, you'll be able to use the 'ipfs files' api
<whyrusleeping>
which is basically a nice set of wrappers around patch
<kpcyrd>
whyrusleeping: something I'd like to have is `echo Qmsomething filename.txt | ipfs magic`
<achin>
what would that do, kpcyrd ?
<ion>
magic
<kpcyrd>
achin: empty folder with filename.txt in it
<kpcyrd>
whyrusleeping: that way I could hash large directories on multiple nodes and then create one directory that would be too huge for a single `ipfs add -r`
<achin>
(with an empty Data, filesize, and blocksize fields)
<alterego>
achin, whyrusleeping: Well, I've decided to not use unixfs and am instead going to just create my own custom objects, so meh.
<achin>
kpcyrd: unless i'm missing something, you can do that with "patch"
<kpcyrd>
achin: yes, but it would be much easier that way
<achin>
alterego: i too am using my own custom object. i think this will be common for apps with non-file-system data
<ion>
whyrusleeping: It’s a bit annoying that you lose static typing in Go if you want to implement a container type that is not one of the built-in special types. :-\
* kpcyrd
hasn't looked into patch yet
<alterego>
achin: The fact we can do this is really cool too! :D
Matoro has joined #ipfs
<achin>
alterego: yeah! the IPFS merkledag objects are generic enough to be usable for anything
<achin>
they'd be slightly more usable with better metadata support, but 1) i think this is coming and 2) this is not fatal
<ion>
0.4.0 will use a new format.
<ion>
AFAI
<ion>
U
<achin>
AEIOU
<alterego>
achin: indeed, I don't agree with how unixfs is using data as effectively a file/object type description.
<ion>
achin: AEIOUÅÄÖ
<achin>
ion: showoff. i don't have those letters on my keyboard!
<alterego>
achin: it would make more sense having a "Type" field like there's a "Links" field, and a "Data" field.
<ion>
achin: I mean AEIOUYÅÄÖ of course
<achin>
alterego: what would you put in the "Type" field?
<mungojelly>
hmm, ipfs is sorta an encapsulation or recapitulation or collapsing of everything, so it makes sense that now we're debating typing
<alterego>
achin: unixfs-dir, alteregos-custom-type, etc.
<achin>
would you have some type of registry of data types?
<achin>
ask people to use strings that we hope don't collide? ask people to use large UUID numbers?
<alterego>
achin: well, there would be "official" datatypes, but they can obviously be ignored at the application level anyway.
<mungojelly>
we can reduce everything to this one simple thing! yay! begone all complications! now how do we deal with dynamic things? what about typing? oh darn everything has followed us here. :D
<alterego>
achin: UUIDs would be probably best, indeed.
<ion>
alterego: IPFS hashes are a better UUID.
<achin>
ion: do you know if that's on track for 0.4.0?
<alterego>
ion: then just use IPFS hashes instead, like an RDFS type.
<alterego>
ion: unixfs could be /ipns/<UNIXFS_DIR_HASH>
<ion>
alterego: Please see the link above.
<ion>
achin: I’m under the impression that 0.4.0 will not happen without that.
<achin>
using the schema hash is clever. i like it
<alterego>
ion: Ah, yes, I remember reading about the use of JSONLD like stuff. That's cool :)
<alterego>
In a way IPFS actually maps so well with RDF.
<alterego>
ion: but will the use of ipfsld remove the need for "Data" in a unix dir object?
* achin
is really looking forward to the ipns caching PR
<ion>
alterego: I think so.
Encrypt has quit [Quit: Sleeping time!]
<achin>
a full 60 seconds to resolve an ipns name!
<ion>
nice
<alterego>
I'd like a more permanent IPNS, right now it seems that they expire after 24hrs?
<achin>
only if you don't keep your daemon running -- it'll automatically refresh the records
<jbenet>
alterego: they dont expire, just need to keep republishing to maintain it on the dht
<jbenet>
this is dht specific
kuroshi is now known as kode54
<alterego>
jbenet: but what happens if you stop publishing when someone tries to get the IPNS hash?
<jbenet>
achin sorry thats awful. i'm being very strict about consistency because it really really matters. We need to clean up our dht code to make this faster, this is revealing problems atm.
mrcstvn has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<jbenet>
(like inefficiencies)
<ion>
alterego: It will disappear from the DHT after a time.
<jbenet>
alterego: if they cant find it they cant use it. we're working through some designs around delegating republishers of records-- this is an important security concern, etc.
<achin>
jbenet: i have no idea if this is easy or not, but perhaps a short term fix could be: if you ask your own peer to look up its own address, that should be instant
<jbenet>
alterego: think of it like this, you're your own DNS name server, others can cache the records and redistribute them for you, but they have to expire to ensure the network is safe.
<jbenet>
achin: it is instant locally-- or it should be. if it isn't that's a bug
<jbenet>
(file it)
<achin>
jbenet: it takes 60 seconds for me :)
<alterego>
jbenet: ok, well as long as it's WIP that's cool. It's just I'm working on something that I really need to have permanent dynamic data, and if IPNS is currently the only way I can get a link to the HEAD of the data, then ...
<ion>
Yeah, it takes some time to resolve your own node id through IPNS.
<jbenet>
achin: yeah that's broken. it should be instant for yourself.
mrcstvn has joined #ipfs
<whyrusleeping>
jbenet: no, we decided that it shouldnt be instant for yourself
<jbenet>
welllllll actually, you might have to scan the network in case you've lost local data-- (amnesiac node). ok it's not always supposed to be instant-- but it certainly should be fast
<jbenet>
whyrusleeping yep yep, i just recalled
<ion>
achin: When go-ipfs supports having multiple keys and sharing them between nodes, it will be reasonable to check whether someone else with the private key has published a newer version.
<ion>
But it’s probably reasonable to assume your node id isn’t shared.
<jbenet>
exhausting the dht routes is the problem i believe. whyrusleeping we should look into the dht query code and see what's going on in there. we need to make sure we're following the XOR gradient well.
<achin>
ion: sure, that's reasonable indeed. but at the moment, it's 1 key per node
<whyrusleeping>
jbenet: dials just take forever
<jbenet>
cc daviddias o/ as he's looked into it too and has concerns as well
<achin>
when i'm looking up my own node, who do i have to dial?
<whyrusleeping>
i was looking at the queries earlier, and the ids are getting closer as expected
<whyrusleeping>
but the time between 'querying node X'
<whyrusleeping>
and 'result from node X' is really high
<jbenet>
whyrusleeping: are they always monotonically improving? we could write a tool + test case to demonstrate this actually.
<jbenet>
(btw, there is a case when they dont increase monotonically, the s/kademlia resolution which is slower but more secure, but we probably should put that in correctly (we're not doing proper s/kad yet) and make it a togglable switch too)
<jbenet>
dont = should not
<achin>
it's interesting how often the time to ipns-resolve is a multiple of 5 seconds
<whyrusleeping>
achin: likely because we have a 5 second timeout somewhere
<jbenet>
whyrusleeping <3
<jbenet>
whyrusleeping <3 <3 <3
<ion>
What about that screenshot?
<whyrusleeping>
although the majority of my time today has been doing laundry and cleaning the house
<whyrusleeping>
ion: building a tool to visualize dht queries
<ion>
nice
<jbenet>
its possible that 5sec timeout is making something behave serially-- probably the dht queries actually. maybe increasing concurrency will help, at the cost of network traffic.