<dread-alexandria>
if anyone's interested in trying out publishing or sending tips w/ comments, you'll need to install a florincoin wallet and a library daemon, let me know and I'll get you a link ;)
equim has quit [Ping timeout: 256 seconds]
<whyrusleeping>
dread-alexandria: oooOooo!
equim has joined #ipfs
headbite has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<dread-alexandria>
@whyrusleeping - i take it thats a good oooOooo?
williamcotton has joined #ipfs
williamcotton has quit [Ping timeout: 252 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<dPow>
Has anyone made a simple text editor integrated into ipfs yet?
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<Blame>
dPow: If it is on the browser end, then the lack of writing ability is a problem.
williamcotton has joined #ipfs
headbite has quit [Quit: Leaving.]
headbite has joined #ipfs
williamcotton has quit [Ping timeout: 256 seconds]
pfraze has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
pfraze has quit [Ping timeout: 265 seconds]
kbala has quit [Quit: Connection closed for inactivity]
headbite has quit [Read error: Connection reset by peer]
dread-alexandria has quit [Quit: dread-alexandria]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
headbite has joined #ipfs
hellertime has quit [Quit: Leaving.]
pfraze has joined #ipfs
warner has joined #ipfs
dread-alexandria has joined #ipfs
nell has joined #ipfs
<nell>
yo
williamcotton has joined #ipfs
williamcotton has quit [Ping timeout: 264 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
www has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<ipfsbot>
[go-ipfs] wking pushed 1 new commit to tk/unixfs-ls: http://git.io/vIrhi
<ipfsbot>
go-ipfs/tk/unixfs-ls 30f7819 W. Trevor King: core/commands/unixfs/ls: Set Argument in JSON output...
williamcotton has joined #ipfs
kbala has joined #ipfs
williamcotton has quit [Ping timeout: 258 seconds]
spikebike has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
zabirauf_ has joined #ipfs
sharky has quit [Ping timeout: 245 seconds]
sharky has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
pfraze has quit [Remote host closed the connection]
m0ns00n has quit [Ping timeout: 244 seconds]
Tv` has quit [Quit: Connection closed for inactivity]
zabirauf_ has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
brab has joined #ipfs
tso has quit [Ping timeout: 256 seconds]
Bat`O has quit [Ping timeout: 256 seconds]
Bat`O has joined #ipfs
brab has quit [Quit: ERC (IRC client for Emacs 24.5.1)]
brab has joined #ipfs
<bedeho2>
jbenet: what do you think of siacoin which just went public? seems very much like filecoin?
cjdmax has joined #ipfs
Wallacoloo has quit [Ping timeout: 265 seconds]
mildred has joined #ipfs
zen|merge has joined #ipfs
zen|merge has joined #ipfs
zabirauf_ has joined #ipfs
mildred has quit [Ping timeout: 272 seconds]
inconshreveable has quit [Read error: Connection reset by peer]
inconshr_ has joined #ipfs
ianopolous has joined #ipfs
mildred has joined #ipfs
ianopolous has quit [Remote host closed the connection]
ianopolous has joined #ipfs
timgws has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
<jbenet>
hey whyrusleeping still around?
<jbenet>
bedeho2: they went public a long time ago, i think the recent event was just shipping the test net.
<jbenet>
(into real net)
<spikebike>
jbenet: know of any apps built on ipfs?
<bedeho2>
I see. Could it be used with ipfs?
<bedeho2>
part of bitswap somehow
<bedeho2>
spikebike: I think project alexandria, the torrent directory, is using it, they just launched
<spikebike>
bedeho2: cool, I'm working on a p2p backup widget which looks like it could potentially layer on top of ipfs well.
<jbenet>
spikebike: many. so a simple one is our own webui-- shipped with ipfs. Eris https://erisindustries.com/ uses IPFS for all the data (they've made several apps, including a decentralized youtube https://github.com/eris-ltd/2gather ) and they serve as a platform for others (so eris apps use IPFS too). There's also Alexandria P2P http://blocktech.com/ which just
<jbenet>
shipped their beta. there's other apps too. On the "less app" and more "website" / "service" / "tool", there's Neocities https://neocities.org , which uses IPFS to archive + replicate tons of websites for people, and will be using it much more closely. lots of static sites, lots of replication of http web, lots of git hosting
<spikebike>
cool, I'll check them out.
<spikebike>
The API, pinning, and CAS I think is all I need.
<spikebike>
already have the directory walker, metadata database, encryption, etc done
zabirauf_ has quit [Ping timeout: 244 seconds]
<jbenet>
spikebike nice
<jbenet>
spikebike we wanted to work on this at some point. we have some requirements for it-- maybe we can work on it together if our goals align
brab has quit [Quit: ERC (IRC client for Emacs 24.5.1)]
<spikebike>
What I have is already in go, got shelved for awhile, and now just starting up again. Just watched the IPFS related videos I could find
<spikebike>
not read the paper yet though
<spikebike>
but generally I want a nice easy to install go client, nice simple UI, and the ability for the user to set the replication they want among peers that are autodiscovered or manually added.
ianopolous has quit [Read error: No route to host]
* spikebike
tries ls /ipfs/QmaPSNuRvRpf8ZsUvDpWL1TsZVcQ4j9W7JmW6iZXHtVf2b and fails
<jbenet>
spikebike: ipfs swarm peers ?
<spikebike>
ah, much better, didn't start the daemon yet
<spikebike>
ah, IPv4 only so far?
<jbenet>
haha
<jbenet>
spikebike: no, ipv6 works
<jbenet>
i think we just dont have the gateways on ipv6. we should soon
<spikebike>
ah, I ported one of the go DHTs to IPv6
<spikebike>
tectu's
<spikebike>
or close
<spikebike>
grr, I keep trying to cut and paste from a youtube window of a console
<spikebike>
ah, heh, my current p2p backup widget uses protocol buffers as well
<spikebike>
jbenet: but yeah I'm all for collaborating, planning on 100% open source, and happy to collaborate where it makes sense
<jbenet>
spikebike: sounds good
<spikebike>
I obviously have quite a bit to learn about ipfs first.
inconshr_ has quit [Remote host closed the connection]
ianopolous has joined #ipfs
<ianopolous>
hi guys, when starting a private ipfs network, is there anything special that the first node needs to be told? When I start one after removing all the bootstrap nodes i get ERROR core/serve: Path Resolve error: context deadline exceeded gateway_handler.go:436
domanic has joined #ipfs
<jbenet>
ianopolous: interesting, always? immediately? or when?
<jbenet>
domanic: are you going to nodeconf?
<domanic>
jbenet, in ca? no
<domanic>
jbenet, I really need to find a something to get me to ca though, just so I can go sailing with substack
<jbenet>
i'd say come visit me-- or we can throw a conf, but i'm about to leave
<substack>
domanic: I need to get a working motor to power me 30m back into my upwind slip!
<substack>
but probably if you're in town we can just go sailing anyways and push the boat back like I've done before scampering over the docks
<domanic>
substack, we can make a sculling oar!
<jbenet>
substack: are you back?
<substack>
jbenet: yep
<ianopolous>
jbenet: upon first run after install
<substack>
domanic: yes! that would be a great project
<jbenet>
ianopolous and it worked fine-- no peers connected, and without that error.
sharky has joined #ipfs
<ianopolous>
jbenet: with that config it works fine. Thanks!
zabirauf_ has joined #ipfs
<zabirauf_>
Hi
<jbenet>
ianopolous: interesting, what were you doing differently/
<jbenet>
?
<ianopolous>
jbenet: if I connect to the global IPFS to download the webui, will I be able to remove the external node addresses afterwards?
<ianopolous>
jbenet: I didn't have the config line
<zabirauf_>
@jbenet: Where is n.Encoded(false) defined? It's being used in go-ipfs/merkledag/node.go
<ianopolous>
ah nope. I spoke too soon, it gives the same error
<jbenet>
ianopolous yep, you can. there may be other nodes that remember you for a while, but your addresses will be purged out of the system in a few minutes
<jbenet>
ianopolous: so fine to connect, download, disconnect.
<daviddias>
Good day all :)
<daviddias>
Today is Portugal day, thought it was important to share :P
<jbenet>
zabirauf_ i think it's in merkledag/coding.go ?
<ianopolous>
jbenet: I've got a bizarre situation (repeatable) where I start with "ipfs daemon --mount" after chowning /ipfs and /ipns to ian, and once ipfs starts, the two directories are owned by root again, so any writes fail..
<jbenet>
hmm
<jbenet>
osx?
<jbenet>
ianopolous ?
<ianopolous>
mint linux
<jbenet>
oh, hm. fuse is usually fine on linux.
<jbenet>
ianopolous: mind filing a bug with a way to get the reproducible build?
<jbenet>
ianopolous: (maybe use a docker container, happy to pin it for you)
<ianopolous>
jbenet: yeah I've not used it before, so may have done something stupid, but followed the instructions. I'll file a bug tonight after work.
<whyrusleeping>
gonna investigate the bitswap memory leak today
<jbenet>
whyrusleeping: more people should be on sprintbot's list
<jbenet>
whyrusleeping: the spdy stuff is more important to get through I think
<lgierth>
checking: nginx now prefers the local gateway, and only falls back to the others when it's down. check it out on neptune
<whyrusleeping>
jbenet: okay, i'll do the spdy stuff then
<okket>
spdy is http2 now
<jbenet>
sprintbot: made https://github.com/ipfs/fs-stress-test tested with 3GB of small files and it worked fine. But this is revealing places we can speed up. CRing, etc. today I'm getting to the dht and records spec so we have something for node-ipfs and the protocol framework daviddias is making
<jbenet>
okket yes but not as far as node stream implementations are concerned (there's good spdy ones not http2 iirc). We'll do http2 eventually.
<daviddias>
awesome :)
<jbenet>
lgierth that's great :)
<lgierth>
say that again when you've seen the code :)
<daviddias>
sprintbot: worked on the Protocol Driven Development Framework. Created Test Spec and Compliance tests for node-multistream implementation. Working on the "example case for PDD on node-multistream" (writing it all up)
williamcotton has joined #ipfs
<lgierth>
oh hey, does sprintbot record the people's checkins?
<lgierth>
should we highlight the bot?
<krl>
have been distracted building a bookshelf and trying to set up windows vm for electron testing
<krl>
electron installer first, then will write up a draft for how i think the api/apps could work, and move towards modularizing the webui
<whyrusleeping>
lgierth: sprintbot doesnt record things
<whyrusleeping>
i can make him do that if we want though
<jbenet>
whyrusleeping: record and post to gh sprint?
<whyrusleeping>
jbenet: i could do that
<lgierth>
nah it's fine
<whyrusleeping>
gotta figure out how to do github auth
<lgierth>
imho we don't need that papertrail
<dPow>
I'm with lgierth
<whyrusleeping>
fair
<dPow>
That's a little excessive
<jbenet>
(it's here in logs anyway)
<lgierth>
right!
<whyrusleeping>
jbenet: who else needs to be in the sprintbot list?
<jbenet>
krl: ie gives windows VMs I think. Can also test with azure cloud
<jbenet>
krl: I think getting electron to work as bret said in https://github.com/ipfs/electron-app/issues/8 may be more useful than working on windows. Cause right now UX is not good enough to ship it
<pjz>
pinning a hash means 'always keep a local copy of this', right?
williamcotton has quit [Read error: No route to host]
williamcotton has joined #ipfs
<pjz>
is there a way to set a kind of 'group' pin? have X machines working together make sure that the specified hash has at least N copies?
<whyrusleeping>
pjz: not directly with ipfs
<whyrusleeping>
ipfs is built upon the assumption that you cannot trust anyone else
<pjz>
ah
<pjz>
good assumption :)
<whyrusleeping>
we will build other services that allow you to do just that though :)
cjdmax has quit [Remote host closed the connection]
<pjz>
is there a way to have a private dir on ipfs? one that only people with a certain key can access?
<pjz>
or do I need to roll my own ?
<whyrusleeping>
pjz: at this moment, no. but very soon
<whyrusleeping>
we are working on a keystore
<whyrusleeping>
that will let you seamlessly encrypt content and share keys
<whyrusleeping>
(seamlessly encrypt and decrypt)
<Encrypt>
Yep :}
<whyrusleeping>
Encrypt: you must get irc mentions constantly
<Encrypt>
whyrusleeping, Well... from time to time :)
<pjz>
so I could like 'ipfs add key <xxx>' and then suddenly some previously-encrypted dirs would be accessible?
<pjz>
oop, I guess it'd be more like 'ipfs key add <xxx>'
<whyrusleeping>
pjz: yeap!
inconshr_ has joined #ipfs
<whyrusleeping>
there will also be a command that will allow you to send or receive a key from a given peer
<whyrusleeping>
so you can say "ipfs key send KEY QmWhyrusleeping"
<pjz>
oh, interesting
<whyrusleeping>
and i would say "ipfs key recv KEY QmPjz"
<whyrusleeping>
(QmXXX being our peer IDs)
<pjz>
ipfs key recv --list
<mmuller_>
right. so "encrypt using the key of the recipient"
<whyrusleeping>
and that manages sending it encrypted using pki
dread-alexandria has joined #ipfs
inconshreveable has quit [Ping timeout: 256 seconds]
<whyrusleeping>
RIP mars, 9 days uptime
marklock has joined #ipfs
<lgierth>
whyrusleeping: what is it with mars' uptime?
<whyrusleeping>
lgierth: mars is one of the bootstrap nodes, and the bootstrap nodes have terrible uptime because they get pounded by everything constantly
<whyrusleeping>
so mars being up for 9 days is the best i've seen in a long while
<whyrusleeping>
i love that this works so nicely ^
<lgierth>
mmh. neptune is up for 194 days
<spikebike>
"pounded by everything"?
<lgierth>
ooh or do you mean ipfs?
<whyrusleeping>
lgierth: ooh, i meant the daemons uptime
<lgierth>
yeah ok
<spikebike>
how many ipfs nodes are out there?
<whyrusleeping>
spikebike: between 70 and 110
<lgierth>
on the others we're restarting regularly now
<whyrusleeping>
gateway.ipfs.io also goes to the gateway nodes
<jbenet>
whyrusleeping: yeah it's nice
<spikebike>
whyrusleeping: introducing 70-110 nodes to a DHT when they reboot doesn't seem particularly intensive... do they handle other things as well (besides the gateway)
<jbenet>
spikebike: there's many more nodes that come up briefly and shut down. the gateway is more intensive part though-- and is getting more and more traffic. but you're right. mostly our lack of optimizations.
* pjz
is kind of wondering if ipfs will obsolete syncthing
<whyrusleeping>
spikebike: yeah, this is entirely lack of optimizations
<pjz>
once you get the private dirs going, anyway
<whyrusleeping>
little memory leaks for the most part
<spikebike>
I mentioned IPv6 last night and jbenet mentioned the lack of ipv6 gateways. Will nodes not find ipv6 peers unless bootstrapped on an ipv6 gateway?
<whyrusleeping>
spikebike: nope, nodes can and do advertise their ipv6 addresses
<spikebike>
but if I do a manual swarm connect it should work?
<whyrusleeping>
you can connect manually, or they will find each other just fine (although, i havent tested that)
<Blame>
So, you guys are really the only ones I have to ask: I just got websockets working server-side on UrDHT. I've tested them with javascript and it seems to work. Now I need to write a javascript library that wraps the websockets and does some heavy lifting (iterative lookup mostly). This is well beyond my experience in javascript. Anybody willing to
<Blame>
pair-program/advise?
<whyrusleeping>
i dont think you can just call SendReply on line 91
<whyrusleeping>
Blame: lol, javascript
<Blame>
The only bit that scares me is how to deal with all the callbacks right. I started to write it and went from 0 to callback hell in a couple of minutes.
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<Blame>
but when it works, I can make in-browser applications that actually interface with a p2p network directly, rather than through proxies.
<jbenet>
Blame: sane callbacks are hard-- i'd spend some time reading successful node modules and pick up style cues there.
<jbenet>
callback hell is very real and problematic, but people can be pretty successful with conventions.
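A minimal sketch of the convention jbenet is pointing at: error-first callbacks with named steps instead of ever-deeper anonymous nesting. The function names here (lookupKey, resolveTwice) are illustrative only, not ipfs APIs.

```javascript
// Node convention: the callback is the last argument, and its first
// parameter is an error (or null on success).
function lookupKey(table, key, cb) {
  if (!(key in table)) return cb(new Error('not found: ' + key));
  cb(null, table[key]);
}

// Naming each step keeps the chain readable top-to-bottom instead of
// building the "callback hell" pyramid.
function resolveTwice(table, key, cb) {
  lookupKey(table, key, function onFirst(err, mid) {
    if (err) return cb(err);   // always forward errors, never swallow them
    lookupKey(table, mid, cb); // the tail call reuses the caller's callback
  });
}
```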
<Blame>
can you do closures in javascript?
<jbenet>
Blame: of course, every function is a closure.
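A small illustration of that point: every call to the outer function captures its own copy of the enclosed variable, which survives between calls.

```javascript
// Each call to makeCounter creates a fresh `count` that the returned
// function closes over -- no globals, no shared state between counters.
function makeCounter() {
  var count = 0;
  return function () {
    count += 1; // refers to the enclosing call's variable
    return count;
  };
}
```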
<Blame>
I kinda hate javascript with a burning passion. But the goal is to make utilizing the DHT in a webapp as easy as possible, so I should make it happen.
<Blame>
node.js and in-browser js seem a bit conflated. What are the differences (what nodejs code could I actually run in a browser)?
<jbenet>
Blame: dont hate js. js is mostly lisp dressed in C. and it's as dangerous as C/C++/Python but in very different ways.
<jbenet>
Blame: embrace it, try to learn it as you would a lisp, and just write very, very small things
<jbenet>
Blame: my conclusion is that all languages are fun and nice, and all languages suck in various ways. pick the right language for the right job.
<jbenet>
so if you want to get interop with us while we do that, maybe would be good.
<bitemyapp>
I wrote a client for Elasticsearch in Haskell
<jbenet>
nice!
<bitemyapp>
but they have no spec and the API surface area is legendary.
<bitemyapp>
so churn is a concern
<inconshr_>
bret: whyrusleeping: what's grpc using? they're doing everything over http2 streams and in Go
<bitemyapp>
if I attempt an IPFS peer in Haskell I
<bitemyapp>
would like some assurances that there'll be a spec and it'll be adhered to by "official" impls.
<substack>
Blame: what kind of DHT are you writing?
<spikebike>
speaking of web-based apps. One of the IPFS demos has a javascript based video player, is that visible in IPFS space? Or something I download and add myself?
<bitemyapp>
jbenet: I'm sure you can sympathize.
notduncansmith has joined #ipfs
domanic has quit [Ping timeout: 272 seconds]
<bitemyapp>
jbenet: when do you expect the spec will be stable?
<jbenet>
bitemyapp: absolutely. that's the exact goal.
<bitemyapp>
jbenet: is it sort of a meeting point for harmonizing nodejs and golang impls?
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet>
but if you want a perfect spec asap, you'll have to either help me write it, or help me get space to write it. (bit overloaded with all the things)
<jbenet>
bitemyapp: yeah its a meeting point, right now go-ipfs is changing to meet what the wire spec calls for and node-ipfs is going for the wire spec directly.
<whyrusleeping>
inconshr_: oooOoo, they do have an http2 stream impl
<jbenet>
bitemyapp: the spec is being written as node-ipfs needs things and forces me to write it
<jbenet>
bitemyapp if you get the wire portion speaking with go-ipfs, that will be a great start. it's actually the roughest thing at the moment.
<lgierth>
wking: i'm sure that's something we can bring up within early access
<inconshr_>
whyrusleeping: is the API any good?
<whyrusleeping>
i'm setting up some code to play with it
<bitemyapp>
jbenet: seems reasonable.
<jbenet>
lgierth: i signed us up.
<lgierth>
cool
<lgierth>
exciting
<jbenet>
bitemyapp: so yeah, there's enough for the wire -- even if it's somewhat haphazard, and the rest is being forced out of me as we go.
<bitemyapp>
jbenet: are there any areas at high risk for churn that I should be aware of?
* jbenet
better not die today.
<bitemyapp>
sorry, I know you cannot make any guarantees.
<bitemyapp>
just trying to get a sense of things.
<jbenet>
i think there's a few things to square out in the IPNS / Routing Records stuff.
<bitemyapp>
cool, thanks.
<jbenet>
but it's so modular and there's so much abstraction of components, that nothing is likely hurt bad.
<whyrusleeping>
inconshr_: for some reason, you cant write to the streams they are using...
* spikebike
launches ipv6 node #2, hoping for some ipv6 connections
<bitemyapp>
jbenet: some (not all) of why I am investigating this
<bitemyapp>
jbenet: is that I want a final project for the book that ties a bunch of things together
<bitemyapp>
jbenet: IPFS fits that description.
<bitemyapp>
it may end up being too hard/complicated, but I still want to investigate.
<jbenet>
spikebike: it may or may not use the ipv6 address depending on how it connects, it may have earlier success with the ipv4 one, but it _should definitely_ work.
<jbenet>
bitemyapp: oh that would be awesome.
<spikebike>
jbenet: I'll watch for a bit and do some testing
<jbenet>
bitemyapp: there's definitely a sub-set of IPFS that would be very manageable. a subset that means "a full node" but not all the hard-core, endless p2p networking tricks.
<JasonWoof>
I'm curious about how the timing of updates (to ipns I guess) work / will work
<JasonWoof>
guess I'm thinking about browser caches, and rss feeds
<JasonWoof>
with http/dns/etc you can say how long an item can be cached, with rss you (on the client) decide how often to poll
<JasonWoof>
would be very cool to have a system where server's wouldn't have to give out some arbitrary guess as to how long before a file changes, and clients wouldn't have to poll
<JasonWoof>
pubsub is maybe the name of this
<spikebike>
generally I greatly prefer subscriptions/callbacks over polling
<spikebike>
it is however more overhead for the updater/server/publisher
<JasonWoof>
more overhead because they have to push, not just update the local DB?
<JasonWoof>
oh, guess they have to track some nodes to push to also
<spikebike>
right
<spikebike>
imagine if dns servers had to track every client that queried it and send them a new packet for every dns change
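A toy sketch of the tradeoff being discussed: with push, the publisher carries the overhead of tracking subscribers, but an update fans out immediately with no polling. All names here are illustrative.

```javascript
// The subscriber list is exactly the tracking cost spikebike mentions.
function Topic() {
  this.subs = [];
  this.latest = undefined;
}
Topic.prototype.subscribe = function (cb) {
  this.subs.push(cb);
};
Topic.prototype.publish = function (value) {
  this.latest = value;
  // push the update to everyone who asked, instead of waiting to be polled
  this.subs.forEach(function (cb) { cb(value); });
};
```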
notduncansmith has joined #ipfs
<JasonWoof>
yeah, that's bad in client/server land
<spikebike>
I've seen various ways to provide hints on how often things are likely to change, but seems like in many cases those hits are ignored and the client polls whenever it wants
notduncansmith has quit [Read error: Connection reset by peer]
ianopolous2 has joined #ipfs
<JasonWoof>
but with ipfs I assume we have a much more distributed web, ie no client with more than 200 connections
<spikebike>
sorry hints not hits
<JasonWoof>
well, some, but average number of connections would be 5-100 I assume
<spikebike>
yeah, my two test ipfs nodes seem to want 38
<jbenet>
JasonWoof: take a look at the records spec. validity schemes.
<jbenet>
Warning: WIP
<JasonWoof>
:) will do
<JasonWoof>
jbenet: link? I can't find it
<JasonWoof>
spikebike: if nodes tend to have fairly stable links, it seems like it could be fairly low overhead to (for a set of "watched" ipns records) keep track of which node they were retrieved from, and ask that node to send updates our way
<JasonWoof>
"watched" would be a flag, (like "pinning" is) that says we want updates to trickle our way
<JasonWoof>
or maybe all ipns records stored locally are watched
<jbenet>
This doesn't yet define pubsub (which is closer to what you describe) but that's on the road
<JasonWoof>
yay
<JasonWoof>
caching is valuable, but I'm all too aware of how it causes problems. I've made my workflow of editing web pages considerably more complex so that when my clients go to the page again, they don't see stale images or css along with the updated html
<spikebike>
JasonWoof: ya, IPFS seems to be trying to bridge key/value stores, traditional HTTP usage (few directories and few files), and normal file systems. It will be interesting to see how it handles normal filesystems that are HUGELY bigger yet often still have tiny files
<spikebike>
DHTs in general are horrible performance wise
atrapado has quit [Quit: Leaving]
<spikebike>
I read a paper awhile back on, er, I think it was called sloppy DHTs, slick idea. Basically you build a low latency, medium latency, and high latency DHT
<spikebike>
everyone is in the high latency, the other two layers self form/merge/split and are basically a cache
<spikebike>
thus popular content gets very high performance/low latency
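A toy model of that layered idea (purely illustrative, not how Coral or ipfs is implemented): lookups try the fast layers first, and keys that keep getting hit are promoted into a lower-latency layer.

```javascript
// Three rings: index 0 = low latency, 1 = medium, 2 = high latency.
// The authoritative copy always lives in the slow ring; the fast
// rings are caches that popular keys get promoted into.
function LayeredStore() {
  this.layers = [new Map(), new Map(), new Map()];
  this.hits = new Map();
}
LayeredStore.prototype.put = function (key, value) {
  this.layers[2].set(key, value);
};
LayeredStore.prototype.get = function (key) {
  for (var i = 0; i < this.layers.length; i++) {
    if (this.layers[i].has(key)) {
      var n = (this.hits.get(key) || 0) + 1;
      this.hits.set(key, n);
      // promote popular content one layer down in latency
      if (n >= 3 && i > 0) this.layers[i - 1].set(key, this.layers[i].get(key));
      return this.layers[i].get(key);
    }
  }
  return undefined;
};
```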
inconshreveable has joined #ipfs
<whyrusleeping>
spikebike: read the ipfs paper ;)
<spikebike>
whyrusleeping: will do, I look forward to it.
<whyrusleeping>
the spec is to use a sloppy dht
<JasonWoof>
I've been doing bittorrent mostly by just adding the info hash to transmission. It takes it quite a while to grab the metadata
<spikebike>
JasonWoof: I've been playing with tracerouting to popular anycast addresses (like say google.com) and insert those into the DHT. That way it's easy to find relatively local peers.
<JasonWoof>
makes sense that DHT is slow, it's a ridiculously huge database, very distributed
<spikebike>
in many cases it's trivial to find people in the same building, same campus, same ISP.
<whyrusleeping>
they dont have to be slow though!
<spikebike>
JasonWoof: well it's just that your peers (by default) have zero geophysical or network locality.
<spikebike>
take your hash, add one, search, it might be on the other side of the planet
<spikebike>
your routing table is close in the logical sense, not the network
<spikebike>
but yeah sloppy is a cool solution to that
<JasonWoof>
sloppy dht sounds cool
<JasonWoof>
as does making local links
notduncansmith has joined #ipfs
<jbenet>
MainlineDHT is really. DSHT
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet>
Is really a*
<JasonWoof>
part of my interest in pubsub, is trying to make more data/etc that I want on the computer to come to me when it's ready, so it's there already when I want it. Then it doesn't matter so much how long it gets to me
<JasonWoof>
I don't like the idea of my rss reader spamming all the servers every 60 seconds or something to check for updates
<JasonWoof>
I could have it do that infrequently, but then I wouldn't see very recent updates when I look
<JasonWoof>
so I've gotta choose between waiting for it to be fresh, and having more stale entries (and waste more bandwidth of the servers)
<spikebike>
JasonWoof: well the trick with local links is allowing them to happen without the users knowing each other first. Ideally if 10 people share an uplink that they would mostly automatically find each other
<JasonWoof>
spikebike: yeah, getting more local connections makes things faster!
<spikebike>
jbenet: afaik mainline uses sloppy in a completely different way, not at all imply finding local peers
Bat`O has quit [Ping timeout: 258 seconds]
<spikebike>
jbenet: it's meant (i believe) in the sense of storing 10 values for a key, a get request might get any subset.
<JasonWoof>
sounds like to make DHT work well, you need some specialization/localization in key space, but for other things, you want connections that are on the same network/isp/continent
<JasonWoof>
I asume practical solutions do some of both
<spikebike>
JasonWoof: my approach so far is to traceroute and put each router you find into the DHT with your ip:port so that anyone else that finds that router local knows about you
<JasonWoof>
spikebike: cool
<spikebike>
it's a natural extension of the standard dht bootstrap where you insert yourself in
<JasonWoof>
oh, I had an idea earlier. I heard there is a problem with connecting to other nodes behind the same nat: many lan addresses are in the same range (ie 192.168.0.*), and that address doesn't tell you if they're in your lan or not
<whyrusleeping>
i wonder if they have heard of the 'sync.Mutex' object
<JasonWoof>
and if you just try lots of 192.168.0.* addresses, it looks like you're network scanning or whatever, and sometimes this gets you in trouble with the local router
tilgovi has quit [Ping timeout: 264 seconds]
<spikebike>
router doesn't normally see local accesses
<JasonWoof>
so I was thinking, for the closest router (if your address is in one of the private/lan ranges like 192.168.*.*) if your node should advertise the hash of the MAC address of the router too.
<spikebike>
192.168.0.1 doesn't use a router to talk to 192.168.0.2
<spikebike>
(assuming they are on the same router)
<spikebike>
and if not it doesn't work anyways since it's not a routable ip
<spikebike>
I'm mostly hoping ip masq/nat goes away and everyone switches to IPv6, I have it at home and work
<spikebike>
but if you A) have a non-routable IP address and B) share the first hop router with another peer it would make sense to try to find each other even with non-routable ips
<JasonWoof>
seems mostly useless to advertise (on DHT) that I'm at 192.168.0.2. But I could advertise that I'm at 192.168.0.2 and my router has MAC address XX:XX:XX
<spikebike>
actually
<JasonWoof>
if somebody else on my LAN sees that, they can can connect to me directly
<JasonWoof>
if your subnet matches, but you have a different router, then you know you're not on my lan, and don't try that IP
<spikebike>
actually you should just use hash(yourIP:firstHopIP)
<spikebike>
hrm
<JasonWoof>
first hop is my router?
<JasonWoof>
seemed like we needed a unique way to identify private subnets
<JasonWoof>
MAC of the router is the only way I could think of to do that
<spikebike>
whichever the first routable hop is
Bioblaze has quit [Remote host closed the connection]
<spikebike>
you might have more than one router per uplink
<JasonWoof>
ok, maybe it's not common, but you can have multiple private subnets with the same first routable hop
<spikebike>
like say a conference/unvirsity with 100s of access points
<jbenet>
spikebike what they're storing as "multiple values" is node addresses
<spikebike>
my desktop on 192.168.1.3 sees a router on 192.168.1.1, then 71.197.104.1 (comcast ISP)
<spikebike>
jbenet: right and you might get any subset
<spikebike>
jbenet: thus it's sloppy
<wking>
"first routable hop" doesn't seem well defined (say you're a few layers deep)
<Blame>
you could do something as simple as require private subnets be globally named
<Blame>
and you need to know the name to join
<spikebike>
wking: first hop that's not 192.168.*, 10.*, 172.16.*, or whatever they are
<JasonWoof>
can't we just use the MAC of the router to uniquely identify the private subnet(s)?
<jbenet>
spikebike: precisely. That's exactly what mainline does. And what we do, too.
<spikebike>
Blame: problem is how will multiple people agree on a name
<spikebike>
jbenet: ah, I meant sloppy in a completely different way
Bioblaze has joined #ipfs
<wking>
spikebike: but maybe I'm on 192.168.1.*, and that's inside 192.168.2.*, and that's inside 192.168.3.*.
<Blame>
we don't. we force humans to communicate with each other.
<wking>
folks in 192.168.2.* don't have to go all the way out to public IPs before heading back down into 192.168.1.*
<spikebike>
wking: then you attempt to find peers by whatever your first routable ip
notduncansmith has quit [Read error: Connection reset by peer]
<spikebike>
wking: seems fine for things like retroshare which is friend to friend. Not so much for efficient network utilization for something like ipfs
<spikebike>
oops
<spikebike>
blame^
<jbenet>
spikebike: read it. Coral uses their dsht just like mainline and we do
Bat`O has joined #ipfs
<whyrusleeping>
i think he's referring to the clustering aspect of coral
<Blame>
another lazy trick is pick a random id initially, but if I discover any peers, take their id. consensus via epidemic routing. I ran some simulations using that trick that worked out ok.
<whyrusleeping>
Blame: thats hilariously smart
<Blame>
KISS
<Blame>
you get a constant race condition about who takes whose id, but it eventually works out
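A toy simulation of that id-adoption trick. To make the race Blame describes deterministic, this sketch has both endpoints of a contact adopt the larger id; on a connected graph everyone still converges to a single id.

```javascript
// ids: array of numeric node ids; edges: pairs of node indexes that
// contact each other each round. Returns the ids after `rounds` rounds.
function converge(ids, edges, rounds) {
  ids = ids.slice(); // don't mutate the caller's array
  for (var r = 0; r < rounds; r++) {
    edges.forEach(function (e) {
      // on contact, both peers adopt the winning id (epidemic spread)
      var winner = Math.max(ids[e[0]], ids[e[1]]);
      ids[e[0]] = winner;
      ids[e[1]] = winner;
    });
  }
  return ids;
}
```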
<spikebike>
wouldn't that make DoS/Byzantine attacks easier?
<Blame>
ish. They could refuse to change ids, and either attempt to fragment the network or merge two networks that could only partially reach each other.
<spikebike>
jbenet: rereading that I think I'm using sloppy wrong. The self organizing of low latency nodes with loose coherency to the higher latency DHTs is what I was referring to
<Blame>
but if you are worried about DoS on a DHT you have a lot more issues to worry about.
<spikebike>
sure, just not sure sharing an ID is a win