<jbenet>
daviddias: owe you a dht spec-- am working on it, but im simplifying some stuff, so it's taking a bit. maybe let's get node-ipfs and go-ipfs talking to each other first? (am debugging some spdy stuff now)
<jbenet>
Bat`O will take a look in a bit
<Bat`O>
jbenet: thx
<daviddias>
jbenet: Got it, thank you :) So the plan is to first make sure that spdystream is workable and, if it is, work from there to a Node impl? Implementing http/2-stream will be the backup plan
<daviddias>
I'll look into getting something like spdystream, taking Indutny's impl
<daviddias>
I had that in mind, but then it is part of the spec that it sits on top of ssl. For example, in node-spdy, an `https` connection handler is used for the server side
<daviddias>
however, it should behave the same way on top of a TCP socket as well
<whyrusleeping>
oooh, yeah
<whyrusleeping>
we just use spdy framing on top of a tcp connection
<pjz>
is there a plan to support symbolic links via FUSE?
<whyrusleeping>
pjz: symbolic links from where to where?
<pjz>
I was thinking links from an ipns dir into the ipfs tree
<pjz>
or into the ipns tree elsewhere
<pjz>
so in my /ipns/local/ I could do like ln -s /ipns/Qm.../ whyrusleeping
<whyrusleeping>
oh, yeah.
<whyrusleeping>
i was working on a command to do that easily
<whyrusleeping>
but you can do it now by just copying the content you want to 'symlink' to from ipfs into /ipns/local/wherever
<whyrusleeping>
it wont actually store two copies
<pjz>
how does that interact with ipns?
<whyrusleeping>
through the fuse interface?
<pjz>
yeah
<pjz>
see
<pjz>
`ipns publish` is how to turn a stable name into an unstable hash, right?
<pjz>
so that the publisher can update the hash
<pjz>
I'm trying to figure out how to link to a stable name
<daviddias>
whyrusleeping do you have a simple test case for spdystream, or a recommendation for testing simple features? Like connecting and transferring a file or something. I'm looking for something I know works with a proper impl, to test this Node.js side against
<whyrusleeping>
daviddias: i'll push my branch
atrapado has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping created feat/spdystream (+1 new commit): http://git.io/vLLk3
<ipfsbot>
go-ipfs/feat/spdystream 4031457 Jeromy: swap transport over to spdystream...
<whyrusleeping>
okay, make sure you have jbenet's spdystream code installed
<whyrusleeping>
and also his go-peerstream stuff up to date
inconshreveable has joined #ipfs
<whyrusleeping>
pjz: you can do 'ipfs name publish' to make your /ipns/<peer ID> point to a given hash
<ipfsbot>
[go-ipfs] whyrusleeping opened pull request #1376: swap transport over to spdystream (master...feat/spdystream) http://git.io/vLLIj
<daviddias>
thanks :)
patcon has joined #ipfs
Tv` has joined #ipfs
* pjz
is trying to figure out how to construct cloud service discovery on top of ipfs
inconshreveable has quit [Remote host closed the connection]
inconshreveable has joined #ipfs
therealplato has quit [Read error: Connection reset by peer]
therealplato has joined #ipfs
<ipfsbot>
[go-ipfs] wking force-pushed tk/unixfs-ls from 4232e9c to 0e49177: http://git.io/vIrva
<ipfsbot>
go-ipfs/tk/unixfs-ls 434871b W. Trevor King: core/commands/unixfs: Add 'ipfs unixfs ls ...'...
<ipfsbot>
go-ipfs/tk/unixfs-ls 663f37c W. Trevor King: core/commands/unixfs/ls: Don't recurse into chunked files...
<ipfsbot>
go-ipfs/tk/unixfs-ls 3e6905e W. Trevor King: core/commands/unixfs: Rename 'ipfs unixfs' to 'ipfs file'...
cjdmax has quit [Remote host closed the connection]
<whyrusleeping>
krl: ping
<pjz>
whyrusleeping: doesn't caching of blocks defeat your idea in that issue?
<whyrusleeping>
hrm?
<whyrusleeping>
technically, yeah. optimistic caching could cause some false positives
<pjz>
whyrusleeping: your idea for service discovery says "crafting a block unique to the service (I'll refer to this as the "Identification block"), and having all peers running the service "Provide" the block, finding peers running a given service is as easy as requesting providers for that services identification block"
<whyrusleeping>
but optimistic caching should be based on actual getblock requests, not findprovider requests
<pjz>
whyrusleeping: but won't the first peer who wants that service cache that block, thus breaking the discovery?
<whyrusleeping>
nope, they dont request the block
<whyrusleeping>
they only call findproviders on it
<pjz>
oic, they never... got it.
<pjz>
whyrusleeping: so I could do this today by 1) having a way to construct the block name given the service I want and 2) ipfs dht findprovs ?
<whyrusleeping>
yeap!
<pjz>
okay, so can you point me at the solution to 1)? ie. given content, how can I predict the block name?
<whyrusleeping>
pjz: i would just do something like: 'echo "my super fancy awesome service name" | ipfs add'
<whyrusleeping>
and that makes you a block
<pjz>
but then I have to delete it or else I become a provider, right?
<pjz>
(thus making others in the network see that I provide a service that I'm actually looking for)
<whyrusleeping>
pjz: ah, yeah
domanic has quit [Ping timeout: 245 seconds]
mildred has joined #ipfs
<whyrusleeping>
you can add it offline, or on a different node
<krl>
whyrusleeping: pong
<whyrusleeping>
krl: any weird voodoo magic i need to do to install the electron app?
<krl>
ah yes, should put that in the readme asap
<whyrusleeping>
lol, okay
<krl>
you need to do 'npm run package'
<krl>
and it should build for your platform
<whyrusleeping>
okay
<whyrusleeping>
it opens up a blank white screen
<whyrusleeping>
well
<whyrusleeping>
npm run start:
<whyrusleeping>
'sh: electron : command not found'
Encrypt has quit [Quit: Eating time!]
<pjz>
whyrusleeping: can I just use python-multihash on the data to get the block name?
<whyrusleeping>
pjz: oh, yeah
<whyrusleeping>
that would be the hash for a raw block
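To make whyrusleeping's point concrete: a raw block's name is just the base58-encoded sha2-256 multihash of its bytes. The sketch below is a minimal, dependency-free stand-in for python-multihash; note (as whyrusleeping says) this is the hash of a *raw* block — `ipfs add` wraps data in a unixfs/protobuf node by default, so its output hash will differ.

```python
import hashlib

# Base58 alphabet (Bitcoin variant), which is what ipfs uses for hashes.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = ALPHABET[rem] + out
    # leading zero bytes become leading '1' characters
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def raw_block_name(data: bytes) -> str:
    """Predict the name of a raw block: multihash = <fn code><length><digest>."""
    digest = hashlib.sha256(data).digest()
    # 0x12 = sha2-256 function code, 0x20 = 32-byte digest length
    return base58_encode(b"\x12\x20" + digest)
```

Any sha2-256 multihash encoded this way comes out as a 46-character string starting with "Qm", which is why all those hashes in the channel look alike.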
alu has quit [Changing host]
alu has joined #ipfs
<krl>
whyrusleeping: ah, it assumes globally installed electron. that's a bug as well
rht__ has quit [Quit: Connection closed for inactivity]
<jbenet>
krl: do you want to have a discussion about electron? may be useful
<krl>
i'd be up for that
<jbenet>
Ok, let's begin! \o/ sorry for delay.
<jbenet>
on the etherpad today, i added the discussions with times
<jbenet>
krl: add one for electron, can probably tweak the times
Encrypt has joined #ipfs
<whyrusleeping>
NOTE: please try and keep items on your sprint to things you're definitely going to work on, and hopefully finish
<whyrusleeping>
ideally, the sprint items are things you *will* finish this week, but thats impractical.
<jbenet>
actually, would be ideal to only have things you will finish.
<krl>
Z = now is ~19:30 ?
<lgierth>
yeah Z is utc
<jbenet>
feel free to add a column for GMT+2 if it makes it easier
<whyrusleeping>
starting things at 2100?
<lgierth>
i'd be happy to do infrastructure earlier if possible
<whyrusleeping>
why so late?
<lgierth>
friends' birthday tonight :]
<jbenet>
whyrusleeping: 21:00Z
<jbenet>
which is 14:00PDT
<whyrusleeping>
i know, thats 2pm here
<jbenet>
lgierth sure, swap with node-ipfs ?
<daviddias>
I'm good with switching :)
<lgierth>
jbenet: cool +
tilgovi has joined #ipfs
<jbenet>
(ok next week i'll have these times days earlier so people can fix things up)
<jbenet>
I'll now go through people listed last week to check in with stuff
<krl>
i think apps on ipfs and electron might go together?
<jbenet>
krl: sure consolidate
<jbenet>
Starting from the top, but if you're not here then, please check in later today
<jbenet>
actually, let's try something easier -- quick roll-call. if you're listed on the sprint and are here say hi
<jbenet>
hi
<krl>
hi
<lgierth>
hi
<jbenet>
daviddias whyrusleeping cryptix chriscool harlantwood dPow wking kbala gatesvp rht tperson are you here?
<jbenet>
(async meetings are always interesting :) )
* whyrusleeping
hides under a rock
<jbenet>
whyrusleeping: ok let's start with you :)
<whyrusleeping>
haha, okay
<whyrusleeping>
last week, i wrote multistream in go, and a netcat tool
<whyrusleeping>
i added a few new calls to the ipfs-shell package
<whyrusleeping>
debugged the deadlock in spdystream
<whyrusleeping>
and mars was up for *eight* days
<jbenet>
also fixed a memleak in bitswap
<whyrusleeping>
oh yeah, forgot about that
<jbenet>
how'd the package tool go?
<daviddias>
I'm here but on the phone, getting to a proper keyboard in some mins :)
<whyrusleeping>
it works pretty nicely
<whyrusleeping>
hashes in package imports bother me
<whyrusleeping>
but other than that, it does what i expect it to
<whyrusleeping>
this week, i'm going to work on the protocol transition
<whyrusleeping>
get go-ipfs speaking multistream
<whyrusleeping>
and also finally get eventlog dead
<whyrusleeping>
ish
<whyrusleeping>
i discovered it was allocating 10-20MB of memory per second
<jbenet>
there's an issue somewhere describing the "putting it on an http route" idea
<whyrusleeping>
yeah, that shouldnt be *too* difficult
<whyrusleeping>
but we will see
<jbenet>
i can help with the multistream stuff as i wrote the host/muxer stuff
chriscool has joined #ipfs
<whyrusleeping>
mmkay, it looks like its as simple as swapping out p2p/protocol/mux with the multistream mux
<whyrusleeping>
but looks are deceiving
<jbenet>
yeah we also need to put it first before the stream muxer. i can help with that
<whyrusleeping>
oooh, yeah
<jbenet>
think you can take on the iptables filtering stuff?
<whyrusleeping>
okay
<whyrusleeping>
yeah, i can do that
<jbenet>
that would be a big deal for people.
<jbenet>
ok great
<whyrusleeping>
i started writing a multiaddr 'ip mask' a little while ago
<whyrusleeping>
and got sidetracked
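The "ip mask" filtering idea (which also comes up later with Luzifer's netscan trouble) boils down to refusing to dial private, loopback, and otherwise reserved ranges. A minimal sketch using Python's stdlib `ipaddress` module, with hypothetical helper names:

```python
import ipaddress

def is_dialable(addr: str) -> bool:
    """Reject addresses a public node shouldn't dial out to: private,
    loopback, link-local, multicast, and reserved ranges -- the kind of
    dials that trip hosting providers' netscan detection."""
    ip = ipaddress.ip_address(addr)
    return not (ip.is_private or ip.is_loopback or ip.is_link_local
                or ip.is_multicast or ip.is_reserved)

def filter_dialable(addrs):
    """Keep only the addresses that are safe to dial."""
    return [a for a in addrs if is_dialable(a)]
```

This is only the address-classification half; the real feature would hook such a check into the swarm's dialer (or express the mask as a multiaddr filter, as whyrusleeping describes).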
<jbenet>
kbala may need some help this week with bitswap sim stuff too
<whyrusleeping>
okay
<jbenet>
ok, next up
<jbenet>
lgierth ?
* krl
?
<jbenet>
i see the swap to ipfs daemon is completed! +9001.
<lgierth>
was all over the gateway infrastructure. nginx now prefers the local gateway, so the gateway response times are a lot more stable now
<lgierth>
killed bootstrapd on the gateways, then lost the code, then killed it again yesterday
<jbenet>
great!
<lgierth>
whyrusleeping: let me know when mars has that too, then we can remove cmds/ipfs_bootstrapd
<jbenet>
i think mars has been the daemon for a while -- right whyrusleeping ?
<whyrusleeping>
yeap, mars has always run just the daemon
<jbenet>
i think it's safe to kill cmds/ipfs_bootstrapd
<whyrusleeping>
she ma bae
<lgierth>
great, then i'll just kill it
* daviddias
arrives
<lgierth>
haven't touched monitoring yet, cjdns didn't see progress either. slowish week, ansible is great and a pain in the ass at the same time
<lgierth>
the feedback RTT is...
<whyrusleeping>
lgierth: we are going to have "work hours"
<lgierth>
it's easy to get distracted reading about golang, and reading in the go-ipfs codebase
<whyrusleeping>
i want to have a set time period where everyone is online and able to answer questions
<lgierth>
oh i mean ansible's feedback RTT :)
<whyrusleeping>
oooooh, okay
<lgierth>
write, test, write, test
<whyrusleeping>
lol
<jbenet>
(on the work hours, whyrusleeping want to open an issue describing the idea + possible times on ipfs/community ? its likely some people cannot make them, but i'm down to try it out. overlap is good)
<whyrusleeping>
jbenet: sure thing
<jbenet>
lgierth: ok re: cjdns and monitoring
<lgierth>
i wanna continue with monitoring and cjdns this week, unless there's other stuff
<jbenet>
will want to chat more about cjdns later this week or next -- having it as a transport will be really nice and may make some nat traversal easier.
<lgierth>
yeah!
<jbenet>
lgierth sounds good! monitoring i think will be very useful, so let's get that good. i can help design metrics am interested in, and can also help with dashboard things
<jbenet>
let's maybe get the data pouring out first, and then hook it up to vis
<lgierth>
yep, regarding metrics you folks know best what's interesting
<jbenet>
also, would be nice if we can do part of it as a dashboard for the webui (or as a separate ipfs app that "gets installed")
<jbenet>
so people can monitor their node.
<krl>
webui should move to just be a shell for switching between apps imo
<lgierth>
let's continue monitoring at 21:00Z in the hangout
<jbenet>
(krl: i think so too, let's discuss in hangout)
<jbenet>
sounds good.
<jbenet>
ok, krl
* whyrusleeping
goes to get some pad see ew
<krl>
yeah, i had an unfortunate week with a disk-crash eating some days
<jbenet>
:( that sucks
wedowmaker has joined #ipfs
<krl>
but managed to get the html menus working in gnome
<jbenet>
krl, didnt have anything listed last week. but you worked on some electron stuff
<krl>
yeah, i still had the tasks from prior sprint, should have subdivided more
<krl>
but finishing up something in the style of your mockups
<jbenet>
sounds good
<krl>
transparency should be possible on osx, probably not in linux
williamcotton has joined #ipfs
<krl>
at least not without disabling webgl, it seems
<jbenet>
it's minor too, we may be able to get something else to look good
<krl>
yeah
<krl>
i would like to spec out a bit how a starlog structure could work
<jbenet>
btw happy to report that electron is the way i run ipfs on my osx machine now
<krl>
ah cool!
<krl>
also thought about making randomized ports the default
<krl>
for the electron one
<jbenet>
hmm mayb-- some people will have to fwd firewall ports and so on, and that would add complexity. could be a checkbox or something. lets discuss on the hangout
<krl>
yeah, just think ppl who forward firewall ports might want to run the daemon directly
<krl>
but yeah, for laters
<jbenet>
sounds good
<jbenet>
anything else?
<krl>
also have some stuff for ipfsd-ctl that i need to wrap up and release
<krl>
but that's tied to the html-menu branch, so i'd want to push them at the same time
<jbenet>
sounds good.
<jbenet>
ok anything else?
<krl>
that's about it
<jbenet>
ok, next up-- daviddias
<daviddias>
woop woop :)
<daviddias>
So, PDD (Protocol Driven Development framework/idea) is in (I believe) a pretty presentable state, designed tests for multistream using it https://github.com/ipfs/specs/pull/12
<daviddias>
as for implementing the DHT in Node, that had to wait a bit in favour of getting spdystream in Node.js first, so we get full interoperability between go and node
<lgierth>
(cool stuff!)
<jbenet>
yeah-- want to get over the hump of this stream muxing thing and be done with it.
<daviddias>
I've been mostly spelunking through node-spdy and checking how I should strip that out, but well, it is complex enough that I have to go deeper and spend some more time figuring out how to get that done
<daviddias>
Right now probably what I will do is apply PDD to this as well
<jbenet>
daviddias makes sense. can probably ask indutny for help too, he hangs out at #io.js
<daviddias>
use docker/spdystream as the implementation to strip out tests, and build node-spdystream piece by piece, looking at Indutny's code and testing along the way
<daviddias>
this is pretty much what I have for this sprint :)
<jbenet>
daviddias: will have the dht spec for you shortly, too. there's an interesting simplification that came up i want to run by whyrusleeping (and you if you're around) at the protocol/specs chat today
<jbenet>
but i'll write it up anyway
<daviddias>
I can be :)
<jbenet>
ok sounds good! +1 to PDD. will be nice to test our protocols that way. maybe when we break apart go-ipfs into separate repos we can add PDD tests
<jbenet>
ok next
<daviddias>
That sounds like an awesome idea. Maybe we can take a bit of the node-ipfs discussion today to list what should be the first components?
<jbenet>
daviddias: sounds good
<jbenet>
i saw chriscool came in, want to check in?
bananabas has joined #ipfs
<chriscool>
Hi everyone!
<jbenet>
hello o/ :)
bananabas has left #ipfs ["Leaving"]
<chriscool>
Yeah ok for checkin
<chriscool>
\o
<chriscool>
Now sharness has test_seq and test_pause
<chriscool>
and it is easy to update sharness as the install-sharness.sh script is improved
<jbenet>
great!
<chriscool>
test_pause can be useful to debug test scripts
mildred has joined #ipfs
<jbenet>
you also added gitcop, which is useful to make sure code is properly signed off
<chriscool>
yeah
<jbenet>
sounds great
<chriscool>
there are still a few issues around GitCop
<jbenet>
did you get a chance to take a look at the circleci pathing issue?
<chriscool>
I looked at it a little bit but didn't see anything right away
<mafintosh>
jbenet: can you link me the bitswap paper?
<chriscool>
I need to try some things on circleci to properly debug
<chriscool>
no I will improve the docs around GitCop and then take care of circleci
<mafintosh>
jbenet: sweet thanks
<jbenet>
chriscool: sounds good, thanks!
<jbenet>
ok, are any of wking kbala krl tperson around?
<jbenet>
errr sorry not krl there :)
<jbenet>
ok, assuming not. check in later
<ipfsbot>
[go-ipfs] lgierth opened pull request #1377: cmd: remove dead ipfs_routingd and ipfs_bootstrapd (master...remove-bootstrapd-and-routingd) http://git.io/vLtMQ
<jbenet>
whyrusleeping: you mean because they merged my PR ?
<whyrusleeping>
yeah
<rtlong>
hey guys, just noticed there's a 0.3.5 release tagged and in the changelog, but the version const wasn't updated yet, so ipfs still reports as 0.3.4. Was this just overlooked or is the release not ready just yet?
<whyrusleeping>
unless you want to just keep go-peerstream based on your fork
<jbenet>
rtlong: damn, that was overlooked.
<whyrusleeping>
rtlong: crap. thanks for pointing that out.
<jbenet>
rtlong: thank you
<rtlong>
oh good. cool
<jbenet>
whyrusleeping: want to push that out? am about to jump into infra chat
<ipfsbot>
go-ipfs/feat/spdystream 8bbde22 Jeromy: swap transport over to spdystream...
hellertime has quit [Quit: Leaving.]
patcon has quit [Ping timeout: 256 seconds]
<vijayee_>
@jbenet I've gotten through creating the menu system and instruction display for ipfs tour tool (https://github.com/vijayee/tourguide) and I'm working on the test runner that verifies each step is complete. Did you have an idea how you wanted to structure the tests for the commands?
Encrypt has quit [Quit: Sleeping time!]
<jbenet>
vijayee_ very cool!
<jbenet>
vijayee_ i think either we can diff a single file, or maybe diff a directory
<whyrusleeping>
jbenet: why does that sadhack thing keep breaking?
<whyrusleeping>
its saying it cant find iptb again
<jbenet>
vijayee_ worst case, let the test writer write a script
<vijayee_>
@jbenet its just a test runner. I'm trying to see whether we'd know in advance what data they're going to run the test against, or whether it'd just be something they create to satisfy the command. Otherwise I'll just have to add something to the chapters data structure to say what its relevant test cases are.
domanic has quit [Ping timeout: 245 seconds]
patcon has joined #ipfs
williamcotton has quit [Ping timeout: 256 seconds]
<ipfsbot>
go-ipfs/feat/spdystream 6703d83 Jeromy: swap transport over to spdystream...
<whyrusleeping>
Luzifer: lol, youre putting bounties on our bugs?
<Luzifer>
me and someone else…
<Luzifer>
sadly this doesn't motivate someone to build a genius fix for the netscan thing…
<Luzifer>
(yet)
<whyrusleeping>
Luzifer: its on my todo list this week
Wkter has quit [Ping timeout: 252 seconds]
notduncansmith has joined #ipfs
<Luzifer>
that bug caused me to move my ipfs node to my private internet connection instead of my server, because my provider threatened to shut down that server, so I want it gone.
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping>
Luzifer: did the iptables rules not work?
<Luzifer>
hope you find a good solution
Wkter has joined #ipfs
<Luzifer>
the iptables rules mess with dockers rules… all automatic iptables things are killing dockers rules, and putting them up manually causes new threat mails from my provider as the netscan is active for $time_until_monkey…
<whyrusleeping>
got it
<Luzifer>
killing docker rules is killing the whole docker host…
<bret>
the ipfs daemon webapp only listens on localhost right?
<whyrusleeping>
bret: yes
<bret>
k coo
<bret>
go getting ipfs right now on a raspi2
<spikebike>
oh, I have a Pi I could test on as well
<Luzifer>
wow. 1am… should go to bed… have a meeting at 11… (-_-")
<spikebike>
doubly so if we turn on ipv6 by default
jj| has left #ipfs [#ipfs]
<Luzifer>
its working quite nice on a pi2… had it running on one before I needed the hardware for another project
<bret>
rad, works perfectly
<spikebike>
nice
<spikebike>
seems like 10-15 IPFS nodes are running ipv6
<spikebike>
not bad since it's off by default
<bret>
my raspi2 id QmcfKasKvHnYoBU1kTcmM1uXJoTg17ENxQpuRCRuNnHs2g
<ipfsbot>
[go-ipfs] whyrusleeping merged master into dev: http://git.io/vLq6U
<bret>
spikebike: how did you turn on ipv6 ipfs again?
<spikebike>
add "/ip6/::/tcp/4001" under Addresses.Swarm in ~/.ipfs/config
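Assuming the config key spikebike means is `Addresses.Swarm` (the swarm listen-address list in the ipfs config file), the resulting stanza would look roughly like:

```json
{
  "Addresses": {
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip6/::/tcp/4001"
    ]
  }
}
```

The `/ip6/::` multiaddr binds the swarm listener to all IPv6 interfaces, alongside the default IPv4 listener.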
Wkter has quit [Ping timeout: 265 seconds]
<spikebike>
there's an open ticket and an open pull request on it
<spikebike>
if you are using private addresses that are temporary I recommend putting your permanent address on it
<bret>
spikebike: i just have an airport extreme handing out comcast ipv6 in native mode
<bret>
will that work?
<spikebike>
ya, probably
<spikebike>
I have a /60 from comcast and it "just works"
<spikebike>
seems like bootstrap fails some times
<bret>
is ipv6 'easier' than ipv4?
<spikebike>
if you get zero ip6 peers let me know and I'll send ya the command to force one
<bret>
there is no nat right?
patcon has quit [Ping timeout: 246 seconds]
<bret>
spikebike: whats the best way too check?
<spikebike>
bret: yeah, generally ipv6 is better/easier for p2p
<spikebike>
PING facebook.com(edge-star6-shv-12-frc3.facebook.com) 56 data bytes
<spikebike>
64 bytes from edge-star6-shv-12-frc3.facebook.com: icmp_seq=1 ttl=47 time=98.0 ms
<bret>
had that working yesterday, didn't do anything interesting
<bret>
moved the raspi this morning.. something must not have configured right
<spikebike>
I get a /60 from comcast and use radvd to serve out a /64 per interface on my router
<bret>
¯\_(ツ)_/¯ im rockin auto mode
<bret>
aka sometimes works
<bret>
k gonna noob this up and try a reboot
Wkter has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet>
kbala around?
<kbala>
jbenet: yup
<jbenet>
want to (a) do sprint check-in and (b) chat about bitswap ml? (am free later too)
<jbenet>
for (a), did you get through "get basic bitswap simulator working https://github.com/ipfs/bitswap-ml/issues/3" and "look into simulating latency and bandwidth caps" ?