jbenet changed the topic of #ipfs to: IPFS - InterPlanetary File System - https://github.com/ipfs/ipfs -- channel logged at https://botbot.me/freenode/ipfs/ -- Code of Conduct: https://github.com/ipfs/community/blob/master/code-of-conduct.md -- Sprints: https://github.com/ipfs/pm/ -- Community Info: https://github.com/ipfs/community/ -- FAQ: https://github.com/ipfs/faq -- Support: https://github.com/ipfs/support
Encrypt has quit [Quit: Sleeping time!]
hashcore has quit [Ping timeout: 246 seconds]
hashcore has joined #ipfs
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
r04r is now known as zz_r04r
NightRa has quit [Quit: Connection closed for inactivity]
hashcore has quit [Ping timeout: 245 seconds]
NeoTeo has quit [Quit: ZZZzzz…]
elgruntox has quit [Quit: leaving]
Tera2342 has joined #ipfs
jhiesey has quit [Read error: Connection reset by peer]
jhiesey has joined #ipfs
computerfreak has quit [Quit: Leaving.]
kvda has joined #ipfs
reit has joined #ipfs
corvinux has quit [Quit: IRC for Sailfish 0.9]
zugz has joined #ipfs
zugzwanged has quit [Ping timeout: 260 seconds]
border0464 has joined #ipfs
cemerick has joined #ipfs
strongest_cup has joined #ipfs
reit has quit [Read error: Connection reset by peer]
micxjo has joined #ipfs
cemerick has quit [Ping timeout: 260 seconds]
Qwertie has quit [Ping timeout: 260 seconds]
jamie_k has joined #ipfs
Not_ has joined #ipfs
evanmccarter has joined #ipfs
Not_ has quit [Quit: Leaving]
Not_ has joined #ipfs
Qwertie has joined #ipfs
jo_mo has quit [Quit: jo_mo]
devbug has quit [Ping timeout: 260 seconds]
leer10 has joined #ipfs
devbug has joined #ipfs
jamie_k has quit [Quit: jamie_k]
cemerick has joined #ipfs
simonv3 has quit [Quit: Connection closed for inactivity]
cemerick has quit [Ping timeout: 245 seconds]
computerfreak has joined #ipfs
computerfreak has quit [Remote host closed the connection]
patcon has quit [Ping timeout: 245 seconds]
dpc_ has joined #ipfs
<dpc_> Is `ipfs publish` `ipfs resolve` known not to work or is it only me?
<dpc_> Ah. It worked finally.
reit has joined #ipfs
ygrek_ has joined #ipfs
<achin> i've been having problems, too
<achin> (mostly with publish)
M-eternaleye has joined #ipfs
M-eternaleye has quit [Changing host]
M-eternaleye is now known as eternaleye
speewave has joined #ipfs
bbfc has left #ipfs [#ipfs]
<jbenet> whyrusleeping: i think travis is choking on bitswap tests for fast add stuff-- what might cause this?
<jbenet> nvm was just taking a long time
ygrek_ has quit [Ping timeout: 246 seconds]
jamie_k_ has joined #ipfs
<ipfsbot> [go-ipfs] jbenet closed pull request #1961: initial support for commands to use external binaries (master...feat/external-bins) http://git.io/v8dJZ
grahamperrin has joined #ipfs
pfraze has quit [Remote host closed the connection]
dpc_ has quit [Remote host closed the connection]
dpc_ has joined #ipfs
border0464 has quit [Ping timeout: 246 seconds]
border0464 has joined #ipfs
Not_ has quit [Ping timeout: 246 seconds]
grahamperrin has quit [Quit: Leaving]
grahamperrin has joined #ipfs
Matoro has joined #ipfs
ogzy has quit [Ping timeout: 260 seconds]
Tera2342 has quit [Ping timeout: 260 seconds]
dpc_ has quit [Ping timeout: 245 seconds]
dpc_ has joined #ipfs
hjoest has quit [Ping timeout: 246 seconds]
hjoest has joined #ipfs
devbug has quit [Ping timeout: 246 seconds]
patcon has joined #ipfs
reit has quit [Ping timeout: 245 seconds]
dpc_ has quit [Quit: Leaving]
ulrichard has joined #ipfs
reit has joined #ipfs
jamie_k_ has quit [Quit: jamie_k_]
jamie_k_ has joined #ipfs
fass has quit [Ping timeout: 272 seconds]
jamie_k_ has quit [Client Quit]
micxjo has quit [Remote host closed the connection]
sbruce_ has quit [Read error: Connection reset by peer]
sbruce_ has joined #ipfs
SebastianCB has joined #ipfs
NightRa has joined #ipfs
e-lima has joined #ipfs
devbug has joined #ipfs
hjoest has quit [Ping timeout: 246 seconds]
hjoest has joined #ipfs
devbug has quit [Ping timeout: 260 seconds]
devbug has joined #ipfs
Guest53989 has joined #ipfs
grahamperrin has quit [Remote host closed the connection]
evanmccarter has quit [Quit: Connection closed for inactivity]
Tera2342 has joined #ipfs
<jbenet> careful daviddias and ehmry: you've got competition! https://bbs.archlinux.org/viewtopic.php?pid=1584172#p1584172
drwasho has joined #ipfs
<drwasho> hey folks
<jbenet> hey drwasho
<daviddias> sweet!
<drwasho> hey mate
<drwasho> how are things?
<jbenet> drwasho good! going through some CR -- you?
<jbenet> whyrusleeping archlinux link above o/
SebastianCB has quit [Ping timeout: 260 seconds]
strongest_cup has quit [Ping timeout: 260 seconds]
<drwasho> Crazy as usual!
<drwasho> On the good news front: encrypted chat over WS is working really well
<drwasho> Purchasing is nearly done
<drwasho> Last big thing is ratings/reputation data
<drwasho> Nothing big or anything :P
rombou has joined #ipfs
<jbenet> hahah nice. not important at all!
<drwasho> Damn, dinner time, g2g
<drwasho> Catch up later, and we should definitely schedule a chat for this week
<jbenet> sounds good!
drwasho has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
strongest_cup has joined #ipfs
<noffle> jbenet: any update on ipfs meetup in bay area / sf?
<jbenet> noffle: not yet, have had zero time to think about it, and it looks like i need to take off this wed/thu.
<jbenet> noffle: i'd be free to lead one tue or wed night, but seems way too short notice to rally people together.
speewave has quit [Quit: Going offline, see ya! (www.adiirc.com)]
slothbag has joined #ipfs
s_kunk has quit [Ping timeout: 240 seconds]
rombou has quit [Ping timeout: 260 seconds]
strongest_cup has quit [Ping timeout: 245 seconds]
strongest_cup has joined #ipfs
zz_r04r is now known as r04r
border0464 has quit [Ping timeout: 245 seconds]
border0464 has joined #ipfs
leer10 has quit [Ping timeout: 246 seconds]
rombou has joined #ipfs
<haadcode> morning o/
<M-mubot> Good 'aye!, @freenode_haadcode:matrix.org
<zignig> hey all
Tera2342 has quit [Ping timeout: 260 seconds]
dignifiedquire has joined #ipfs
dignifiedquire has quit [Client Quit]
<davidar> Hmm, I should fix how mubot names people
CarlWeathers has quit [Ping timeout: 245 seconds]
CarlWeathers has joined #ipfs
<zignig> davidar: o/
<davidar> zignig: \o
<zignig> got some leave so I'm calculating my programming time at the moment.
<zignig> I'm looking into getting astralboot to run up the ipfs build and CI infrastructure.
<davidar> zignig: write a program to do it for you :p
<davidar> cool
<zignig> I think that some serious DOGFOODING is needed.
<zignig> also be able to build the build infrastructure from a single hash is nice and recusive.
<davidar> indeed
<zignig> *recursive
<zignig> Tex.js is looking nice, is there any way to cache it more?
<davidar> yeah, i haven't put too much effort into optimising it yet
<davidar> mostly waiting for the arxiv corpus to be html-ified so we have something to test it on
<davidar> premature optimisation, etc
<zignig> indeed. I've been slurping the xml def file but have not done much with it.
dignifiedquire has joined #ipfs
dignifiedquire has quit [Client Quit]
<davidar> should be able to do even more cool stuff once the article contents itself is in xml too ;)
rendar has joined #ipfs
RJ2 has quit [Read error: Connection reset by peer]
RJ2 has joined #ipfs
rombou has quit [Quit: Leaving.]
rombou has joined #ipfs
rjeli has quit [Ping timeout: 264 seconds]
CarlWeathers has quit [Read error: Connection reset by peer]
CarlWeathers has joined #ipfs
CarlWeathers has quit [Read error: Connection reset by peer]
rombou has quit [Ping timeout: 246 seconds]
rjeli has joined #ipfs
CarlWeathers has joined #ipfs
guybrush_ is now known as guybrush
guybrush has quit [Changing host]
guybrush has joined #ipfs
sivachandran has joined #ipfs
<sivachandran> Hi, I am trying to find out how well IPFS utilises the network bandwidth. On a 500Mbps communication channel I see IPFS capped at 50Mbps, whereas scp is able to utilise the max bandwidth. Is there a known limitation in IPFS related to network bandwidth utilisation?
s_kunk has joined #ipfs
s_kunk_ has joined #ipfs
<jbenet> sivachandran: there shouldn't be. i get higher bw utilization-- can you tell me more about the use case here?
s_kunk has quit [Disconnected by services]
<jbenet> (what machines?)
CarlWeathers has quit [Ping timeout: 260 seconds]
s_kunk_ is now known as s_kunk
s_kunk has quit [Changing host]
s_kunk has joined #ipfs
<sivachandran> I'm trying on two AWS EC2 instances (m4.large) in the same region. iperf reports 545Mbps between the machines. I get almost the same bandwidth utilisation when I copy through scp.
<sivachandran> I initialised ipfs on both instances. Added a large file in one ipfs node and tried ipfs cat of the added file hash in another ipfs node.
<sivachandran> Measured the time using 'time'.
<jbenet> sivachandran: the short of it is that there's a ton we need to optimize still -- but what we should do is figure out where go-ipfs is getting stuck in your case.
<sivachandran> Ok. Are there any parameters that I can experiment? Or is there a place in go-ipfs I can look at it?
<jbenet> sivachandran: some possibilities: (a) right now bitswap is not able to express full dag path queries, so it may lose some time getting directories in a hierarchy. in practice it's not too bad, but it is inefficient. (b) there's hashing involved, so some of this _may_ be waiting for cpu and poorly utilizing the bw in the meantime (higher concurrency in
<jbenet> certain areas would fix this easily). (c) bitswap also has a constant limit of blocks being downloaded from a peer. in practice this should be fine, but there may be a problem at play.
NightRa has quit [Quit: Connection closed for inactivity]
<jbenet> sivachandran: could you get me a profile of the bandwidth utilization of the machine? (sorry we should make it easier to get these with ipfs itself, but not there yet)
rombou has joined #ipfs
<sivachandran> jbenet: I will try.
bergie has quit [Ping timeout: 264 seconds]
bergie has joined #ipfs
devbug has quit [Ping timeout: 264 seconds]
<jbenet> sivachandran: looking at the pattern will help me diagnose
dignifiedquire has joined #ipfs
infinity0 has quit [Remote host closed the connection]
<jbenet> sivachandran: check out http://www.ntop.org/products/traffic-analysis/ntop/ and http://martybugs.net/linux/rrdtool/traffic.cgi -- having some graphs of bw/s would be great
pokeball99 has quit [Ping timeout: 264 seconds]
jo_mo has joined #ipfs
pokeball99 has joined #ipfs
hjoest has quit [Ping timeout: 246 seconds]
<sivachandran> jbenet: I will try to capture it, thanks.
dignifiedquire has quit [Ping timeout: 260 seconds]
dignifiedquire has joined #ipfs
hjoest has joined #ipfs
sivachandran has quit [Remote host closed the connection]
infinity0 has joined #ipfs
e-lima has quit [Ping timeout: 246 seconds]
Encrypt has joined #ipfs
echo_oddly has quit [Quit: No Ping reply in 180 seconds.]
jo_mo has quit [Quit: jo_mo]
trn has quit [Quit: quit]
CarlWeathers has joined #ipfs
trn has joined #ipfs
e-lima has joined #ipfs
echo_oddly has joined #ipfs
trn has quit [Excess Flood]
trn has joined #ipfs
dignifiedquire_ has joined #ipfs
pjz has quit [Ping timeout: 260 seconds]
trn has quit [Excess Flood]
pjz has joined #ipfs
echo_oddly has quit [Quit: No Ping reply in 180 seconds.]
dignifiedquire has quit [Ping timeout: 260 seconds]
dignifiedquire_ is now known as dignifiedquire
trn has joined #ipfs
jo_mo has joined #ipfs
echo_oddly has joined #ipfs
<ipfsbot> [webui] greenkeeperio-bot opened pull request #117: Update i18next-client to version 1.11.2
echo_oddly has quit [Quit: No Ping reply in 180 seconds.]
computerfreak has joined #ipfs
echo_oddly has joined #ipfs
NeoTeo has joined #ipfs
sivachandran has joined #ipfs
sivachandran has quit [Remote host closed the connection]
sivachandran has joined #ipfs
strongest_cup has quit [Ping timeout: 260 seconds]
jo_mo has quit [Quit: jo_mo]
rombou has quit [Ping timeout: 246 seconds]
srenatus has joined #ipfs
sasha- is now known as sasha
strongest_cup has joined #ipfs
martinkl_ has joined #ipfs
martinkl_ has quit [Max SendQ exceeded]
martinkl_ has joined #ipfs
hjoest has quit [Ping timeout: 246 seconds]
hjoest has joined #ipfs
<jbenet> cloutier: the random feature is so addictive.
<mafintosh> jbenet: haha
<mafintosh> jbenet: thats awesome
<jbenet> mafintosh: did you add it or did someone else?
slothbag has quit [Quit: Leaving.]
strongest_cup has quit [Ping timeout: 245 seconds]
hellertime has joined #ipfs
<computerfreak> guys, are there already binaries for version 4 ?
<computerfreak> i never managed to compile ipfs xD
<computerfreak> too much go troubles
<jbenet> warning: the repo migration is not in there yet, so use a different repo
<jbenet> and the network handshake is different, so you cant talk to <0.4.0 versions.
<achin> what does the upgrade path look like, then? (how do i serve content to both 0.4.0 nodes and 0.3.x nodes?)
<jbenet> achin: we're still deciding that. may want to voice concerns + desires in an issue in go-ipfs
<jbenet> (our goal is to get everyone ready to jump to 0.4.0 together. ofc, not everyone will, so we want to make this as smooth as we can)
<achin> sure thing. is there an already open issue?
<jbenet> dont recall.
<achin> k
<achin> i'll ponder
<mafintosh> jbenet: it wasnt me :)
<jbenet> mafintosh: **shrug**
sivachandran has quit [Quit: Leaving...]
<pepesza> computerfreak: try using gvm for managing go versions. Worked wonders for me.
patcon has quit [Ping timeout: 260 seconds]
<computerfreak> which version is ipfs.io running? because i use that to publish my demo-website to the internet ..
<computerfreak> pepesza: thx, will try that
<computerfreak> so it's not recommended to update right now?
robcat has joined #ipfs
robcat has left #ipfs [#ipfs]
robcat has joined #ipfs
Encrypt has quit [Quit: Eating time!]
infinity0 has quit [Ping timeout: 246 seconds]
yaoe has quit [Ping timeout: 246 seconds]
<robcat> Hi, I'm trying to write an adapter for pacman (the Arch Linux package manager) to download the packages from ipfs directly using their hash
<robcat> in the package database the sha256sums are available for each package, and my challenge is to convert them into base58 encoded multihashes
<robcat> I used go-multihash to do the reverse operation (converting the b58 hash that results from `ipfs add` and extracting the sha256 hash), but it doesn't match the output of sha256sum
<robcat> Here's my script: https://play.golang.org/p/x6-S2Y_zZd (the hashes don't match). Can you tell what I'm missing?
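robcat's mismatch has two layers: the multihash framing around the digest, and ipfs's chunking plus unixfs/protobuf wrapping on top of that. A minimal Python sketch of just the multihash layer, assuming the standard sha2-256 function code 0x12 and the bitcoin base58 alphabet:

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    """Base58 (bitcoin alphabet), preserving leading zero bytes as '1'."""
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58_ALPHABET[r] + out
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def sha256_multihash(payload: bytes) -> str:
    """multihash = <fn code><digest length><digest>; 0x12 is sha2-256."""
    digest = hashlib.sha256(payload).digest()
    return b58encode(bytes([0x12, len(digest)]) + digest)

# Multihash of the *raw* bytes "foo\n". This is still not what
# `ipfs add` prints, because ipfs chunks the data and wraps it in a
# unixfs/protobuf node before hashing -- which is exactly the
# mismatch robcat is seeing.
print(sha256_multihash(b"foo\n"))
```

Every base58 sha2-256 multihash starts with "Qm" (those two prefix bytes always encode to that character pair), which is why default ipfs hashes all look alike up front.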
hellertime has quit [Quit: Leaving.]
rombou has joined #ipfs
strongest_cup has joined #ipfs
yaoe has joined #ipfs
Nitori has quit [Write error: Connection reset by peer]
Nitori has joined #ipfs
<cryptix> robcat: do you know about the chunking involved when reading data into ipfs?
Encrypt has joined #ipfs
e-lima has quit [Ping timeout: 245 seconds]
<cryptix> robcat: id love to see that happen for a bunch of projects but we need an index of full file hash to possible ipfs hashes
<cryptix> like bittorrent, ipfs splits up files and then constructs a merkle tree/dag from parts of the file
<robcat> cryptix: about the chunking, I'm using the ipfs default one
<robcat> size-64
<cryptix> right now there are three layouts for this, balanced (fixed size blocks) trickle (optimized for streaming) and rabin (totally over the top of my hat, rolling fingerprint something something.. :))
<robcat> I'm using `ipfs add -s size-64 foo` (fixed 512-byte blocks)
<robcat> shouldn't be the one used by sha256sum too?
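The rabin layout cryptix mentions cuts chunks where the content itself matches a pattern, rather than at fixed offsets. A toy sketch of that cut-on-pattern idea, with the caveats in the comments (this is not go-ipfs's chunker, and the hash here is a naive running hash, not a true sliding-window Rabin fingerprint):

```python
def chunk_boundaries(data: bytes, mask: int = 0x3F,
                     min_size: int = 64, max_size: int = 1024):
    """Toy content-defined chunker: cut wherever a running hash of the
    bytes since the last cut matches a fixed bit pattern. Real rabin
    chunking hashes a *sliding window*, so boundaries resynchronize
    shortly after an edit; this simplified version only illustrates
    the cut-on-pattern idea and is NOT go-ipfs's implementation."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = (h * 31 + b) & 0xFFFFFFFF
        size = i - start + 1
        if (size >= min_size and (h & mask) == 0) or size >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

The min/max bounds keep chunks from degenerating into single bytes or one huge block when the pattern fires too often or never.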
jonl has quit [Ping timeout: 264 seconds]
<cryptix> hrm.. not sure about how the sha256sum tool is implemented. i'd guess if it did chunking, the hashes would get longer (like bt infohash)
jonl has joined #ipfs
<robcat> ok, maybe for big files chunking may be a problem...
<robcat> but I'm trying to make it work with a 4-byte file (i.e. "foo\n")
<cryptix> looking at your code now
<cryptix> oh well
<cryptix> wait
<achin> if you have the hash of a file (say, from sha256sum), you can't really automatically generate an ipfs hash without going through the "ipfs add" code
<cryptix> ipfs also wraps data and link objects
<cryptix> with protobuf etc
<cryptix> achin: yea, i know - but an index of .md5 .sha1 to known ipfs hashes would be really handy
<cryptix> for pkg managers and archive lookups alike
<achin> in general, there is not a one-to-one mapping between a file and an ipfs hash
<achin> since a single file can have, in theory, many different IPFS hashes
<jbenet> depending on the dag used to represent it -- the layout
<robcat> yes, having many hashes is not a problem (I just need one)
<robcat> and I thought that a simple files didn't need to be wrapped in a larger structure before hashing
<jbenet> robcat: we've discussed adding the _raw data hash_ to attributes of a file, and using a tree-mode hash like blake2 for that
<cryptix> achin: i think that can be improved - i wanted to chime in on that last sprint with some questions about this topic wrt s3 hosting for instance.. would be nice to batch request parts of a dag layout
<achin> (sorry all, in a phone call. be right back)
<jbenet> robcat: files get wrapped because chunking needs to happen and because of dag formatting (we've also discussed keeping raw data addressable too, so you could do that with small files (<256k))
<cryptix> robcat: see the plumbing commands 'ipfs object get/put' 'ipfs block get/put'
<jbenet> robcat would this make a huge difference here?
<jbenet> robcat: btw, the way that we're doing this for other package managers, like npm
felixn has quit [Ping timeout: 265 seconds]
dreamdust has quit [Ping timeout: 265 seconds]
<robcat> jbenet: ok, now I get why the wrapping is necessary
<jbenet> robcat: is to keep a "pkgmgrname -> ipfs hash" map, in an object
<jbenet> robcat: we could similarly do "pacman hash -> ipfs hash" in an ipfs object
<jbenet> but doesnt pacman have package names? may want to use that
<robcat> jbenet: yes sure, pacman already looks up the package info from the package database
<robcat> and the package database already contains various hashes
<robcat> the "IPFS hash" could be added quite easily
<jbenet> robcat: you could (a) put all that info in ipfs itself, and use ipfs as the package database, (b) just look up by name twice, and match the hashes (the object in ipfs land could list the pacman hash too)
<jbenet> robcat maybe start with (b) and can work toward (a)
<cryptix> id still like to see an index of known md5/sha1/sha256 to a known ipfs hash
<jbenet> robcat: just made https://github.com/ipfs/notes/issues/84 -- feel free to post whatever there
<cryptix> had a lot of occurences where PKGBUILD failed because ftp was unreachable for instance
<jbenet> i'll track it and answer as you need
<cryptix> and although i'd love to see everything on ipfs, it will take some time to get there...
ogzy has joined #ipfs
pod has joined #ipfs
cemerick has joined #ipfs
<achin> cryptix (et al), my point was simply, you can't just take the hash of a file and convert it into an ipfs hash. you have to actually add the file to ipfs and then record it (perhaps in IPFS itself!)
<cryptix> achin: +1 with you on that one :)
<ipfsbot> [webui] digital-dreamer opened pull request #118: German localization (master...l20n-de) http://git.io/vRKSf
thelinuxkid has joined #ipfs
<jbenet> achin: indeed.
<jbenet> achin: well-- you _could_ figure out what the hash would be by doing the same thing ipfs would do-- but that's just semantics ;)
edwardk has quit [Ping timeout: 245 seconds]
<achin> but there is an important difference! the difference is between "what hash can i use to get this file" and "what hash might be used to get this file"
<jbenet> achin: more generally, what we both mean is: you cant predict what the cryptographic hash of something is until you hash it.
rombou has quit [Ping timeout: 246 seconds]
<achin> right
bergie has quit [Ping timeout: 245 seconds]
srenatus has quit [Ping timeout: 260 seconds]
<cryptix> is there news on dag visualization tools? i'd like to understand rabin better
<cryptix> how it works for different kinds of media etc
<jbenet> achin: understood-- i mean that in a way, you can treat the formatting as part of "the hash scheme" so ipfs_hash(data) = requires chunking it, forming tree, getting root, etc.
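jbenet's "ipfs_hash(data) requires chunking it, forming a tree, getting the root" can be sketched as a toy two-level tree. The roots below will not match any real ipfs hash (the actual dag wraps chunks in protobuf nodes and may use more levels); the sketch only shows why the whole pipeline, not just the digest function, defines the hash:

```python
import hashlib

def merkle_root(data: bytes, chunk_size: int) -> str:
    """Toy version of 'chunk it, form a tree, take the root': hash each
    fixed-size chunk, then hash the concatenation of the child digests.
    Illustrative only; not the real ipfs dag format."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    if len(chunks) == 1:
        return hashlib.sha256(chunks[0]).hexdigest()
    leaves = [hashlib.sha256(c).digest() for c in chunks]
    return hashlib.sha256(b"".join(leaves)).hexdigest()
```

Same bytes, different chunk size, different root: the one-file-many-hashes situation achin describes.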
<jbenet> cryptix: would be great to make a benchmark
kumavis has quit [Ping timeout: 245 seconds]
<cryptix> yes! varying kinds of block size etc..
<cryptix> (bit bummed you wont make it to 32c3 but i can totally understand - i most likely wont make it to next years if it isnt in hamburg due to similar reasons)
<achin> jbenet: i'd like to agree, but i'm worried that people will write their own tools for chunking/tree-forming that will not match the impl that's used in go-ipfs
<cryptix> achin: i dont see that as a problem as long as you keep to the rules
<jbenet> achin: that's why we write up specs :) -- people don't create their own versions of "sha256" or of "git data structs"
<jbenet> achin: but i hear you-- easier to deviate with ipfs.
<cryptix> dont reorder links, etc
<jbenet> achin: we should draft up all the chunking in https://github.com/ipfs/specs
<achin> i don't think specs really help, though
<achin> here's a contrived example
felixn has joined #ipfs
<jbenet> achin: specs and tests. this is how any protocols agree-- including implementations of hashes like sha256
strongest_cup has quit [Ping timeout: 245 seconds]
<achin> i have a pretty massive file that i want to add to IPFS. i happen to know that it has an internal structure that will lend itself to a certain type of chunking. this type of chunking is so specific to my file that go-ipfs can't be expected to do the optimal thing
<achin> so i'll write a tool to add this file in the optimal way (using the optimal chunking)
<jbenet> achin: nothing prevents you from reading the sha256 descriptions and rolling your own implementation that ignores details or chosen parameters and thus yields totally different hashes
kumavis has joined #ipfs
strongest_cup has joined #ipfs
speewave has joined #ipfs
<jbenet> achin: ohhh i see what you mean-- sure and that's fine.
<achin> so the hash i compute (which will obviously be correct for my given chunking) will be a totally valid hash. and it will be different (also obviously) from the hash that go-ifps computes. but the hash that go-ipfs computes is not the "more correct" hash
<jbenet> achin: and doing so you know that you won't match other graph layouts of it. if what you're worried about is having too many graphs here-- one solution is to (a) track the raw hash of the data (preferably with a nice hash for it like blake2 https://github.com/ipfs/notes/issues/83) and then (b) maintain an index of all observed "access graphs" (or indices)
<jbenet> for given data
pokeball99 has quit [Ping timeout: 264 seconds]
<jbenet> achin: there's a tangential problem here btw. suppose i give you two images, one encoded in jpg and one in png. they're the same image. sort of.
RJ2 has quit [Read error: Connection reset by peer]
anderspree has quit [Read error: Connection reset by peer]
<jbenet> achin: unix doesnt solve this: it treats the two images (which are not byte-for-byte identical, though they are meant to represent the same data, and can often represent it so well that there's a 1:1 mapping)
<jbenet> as "different files"
<achin> right. that seems unavoidable to me
<jbenet> this is an unavoidable problem given how information theory works. and it shows up at many layers of the "data representations" stack
<achin> backing up 1 small step, even with a single impl of go-ipfs, multiple hashes/multiple trees can be computed for the same file. all the user has to do is use different --chunker arguments
<jbenet> achin: yes, this is not a problem.
<jbenet> achin: yes, it decreases dedup. but in practice is not a big problem. and we can mitigate it as i described above.
pfraze has joined #ipfs
<achin> right. i wasn't suggesting it was a problem (in that problems are things that need solving). just another demo of the one-to-many relationship between bytes and ipfs hashes
<achin> i do believe we are all in agreement about all of this :)
<jbenet> indeed :)
edwardk has joined #ipfs
bergie has joined #ipfs
<robcat> jbenet: I was reading the multihash specs, and I was thinking
<robcat> a special multihash prefix could be introduced for files shorter than 256 bytes
pokeball99 has joined #ipfs
<robcat> e.g. 0x00: No hash
<robcat> and in this case, the unhashed content could simply be appended to the prefix
RJ2 has joined #ipfs
<robcat> this would be useful to address simple and universal keywords
anderspree has joined #ipfs
<robcat> for example, in the semantic web (addressing a concept by keyword, no lookup required)
jonl_ has joined #ipfs
jonl has quit [Ping timeout: 264 seconds]
jonl_ is now known as jonl
jamie_k has joined #ipfs
Matoro has quit [Ping timeout: 245 seconds]
bergie has quit [Read error: Connection reset by peer]
bergie has joined #ipfs
<ion> Interesting
ekroon has quit [Ping timeout: 264 seconds]
ekroon has joined #ipfs
Guest53989 has quit [Remote host closed the connection]
devbug has joined #ipfs
srenatus has joined #ipfs
<jbenet> robcat: yeah :) it occurred to me a while back too "the identity hash"
<jbenet> robcat: yep super useful.
<achin> would there be any padding?
devbug has quit [Ping timeout: 264 seconds]
<achin> (it would be reasonable to expect a multihash to have a fixed length)
<jbenet> achin no, it is not. sha256 and sha3 have different lengths
jamie_k has quit [Quit: jamie_k]
<achin> i mean, given a prefix
<jbenet> the prefix gives the length
<achin> ok right. good point
ashark has joined #ipfs
<achin> that was some forward thinking! giving both the hash type and the hash length in the prefix
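The self-describing prefix achin is pointing at, plus robcat's "no hash" idea from earlier, can be sketched in a few lines. The 0x00 identity code and the size cutoff are taken from the discussion, not from any spec text quoted here, so treat both as illustrative assumptions:

```python
def multihash_decode(mh: bytes):
    """Split a binary multihash into (function code, digest). The first
    byte names the hash function and the second gives the digest length,
    which is why multihashes of different lengths can coexist."""
    code, length = mh[0], mh[1]
    digest = mh[2:2 + length]
    if len(digest) != length:
        raise ValueError("truncated multihash")
    return code, digest

def identity_multihash(content: bytes) -> bytes:
    """robcat's 'no hash' idea: code 0x00 plus the raw content inline,
    so tiny payloads are addressed by their own bytes, no lookup
    required. The cutoff below is an arbitrary illustrative choice."""
    if len(content) >= 128:
        raise ValueError("identity encoding only makes sense for tiny payloads")
    return bytes([0x00, len(content)]) + content
```

Decoding an identity multihash just hands the content back, which is what makes it useful for addressing short keywords directly.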
technomad has quit [Ping timeout: 245 seconds]
infinity0 has joined #ipfs
rombou has joined #ipfs
joeyh has quit [Ping timeout: 260 seconds]
dreamdust has joined #ipfs
technomad has joined #ipfs
spikebike has joined #ipfs
<achin> ( https://github.com/jbenet/multihash/issues/13 for others following along )
<jbenet> i'd really like to fix https://github.com/jbenet/multihash/issues/14
<jbenet> i dont have a good scheme yet
<jbenet> because the character needs to be in the given encoding, too
joeyh has joined #ipfs
<achin> yeah
<achin> i'm also wondering if we need a full byte to encode the encoding
<robcat> maybe the prefixes could have been assigned in the 8-bit space
<robcat> so that the first character of the base58 encoding would be self-explanatory
<robcat> look for example here: https://wiki.ripple.com/Encodings
<robcat> ripple has different kind of hashes, that have a special 1-byte prefix
<robcat> and looking at the first letter of the base58 encoding, a human can immediately understand what kind of object it is
<robcat> in the multihash case, the first character of the base58 encoding could self-explain the kind of encoding
<robcat> (by the way, sorry if I'm missing already-opened issues, I just discovered ipfs)
<richardlitt> dignifiedquire you around?
NightRa has joined #ipfs
eject has joined #ipfs
rombou has left #ipfs [#ipfs]
<dignifiedquire> richardlitt: back now
<dignifiedquire> jbenet: daviddias let’s have a chat about api headers tonight, that’s probably easier to resolve the questions
amade has joined #ipfs
<daviddias> we can make it part of the agenda of go-ipfs
<jbenet> robcat right but we need a table for common base encodings: b1, b2, b8, b16, b32, b58, b64, b128, b256, ...
<jbenet> robcat: there may be one that's clear + trivial and works in the given encoding
<robcat> jbenet: ah wait, my proposal is only valid if one base is chosen as the standard one
<jbenet> like, could use '0' for b1, '1' for b2, '7' for b8, 'f' for base16, but the rest are tricky-- what do you choose for b32? b64? b128?
<jbenet> (the alphabets not always nicely ordered)
<robcat> the alphabet can be reordered
<ion> jbenet: …, b65536
<cryptix> everybody coming to 32c3: if you havnt paid for your ticket: do so MEOW!
<cryptix> heh b65536 - i might hack that in go
<jbenet> also different bases use different alphabets
<jbenet> ion: yeah saw that, i had a similar idea a while back
<jbenet> ion: but i misunderstood unicode
<dignifiedquire> ion: that module
<ion> Even different base58s use different alphabets. :-)
<jbenet> ion: i thought unicode had no cap
<jbenet> ion: i thought unicode was properly universal, no limits to the code, arbitrarily large varints
<jbenet> ion: so, bINFINITY -- can represent any message in one codepoint! unrenderable, but copy-pastable
<achin> <insert joke about all unicode being unrenderable />
<jbenet> alas, UTF-8 has a ~100k codepoint cap. (thanks to UTF-16)
<ion> jbenet: AFAIR Unicode has a cap of 0x10ffff.
<jbenet> right
jfred__ has quit [Ping timeout: 260 seconds]
<ion> That’s about 1 million, not 100k.
jfred_ has joined #ipfs
<ion> UCS-2 had a limit of 2^16 but UTF-16 is a variable-width encoding which supports all of Unicode.
eject has quit [Remote host closed the connection]
M-rschulman1 has joined #ipfs
<ion> cryptix: More information: https://www.npmjs.com/package/base65536gen
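The "which base am I looking at" problem in this thread is solved by spending one leading character on naming the encoding. 'f' for base16 matches the code the later multibase table settled on; the helpers below are a sketch of the mechanism, not a full multibase implementation:

```python
import binascii

# One leading character names the base, so a reader (human or program)
# can tell how to decode the rest without out-of-band information.
def mb_encode_base16(data: bytes) -> str:
    return "f" + binascii.hexlify(data).decode()

def mb_decode(s: str) -> bytes:
    prefix, body = s[0], s[1:]
    if prefix == "f":
        return binascii.unhexlify(body)
    raise ValueError("unknown multibase prefix %r" % prefix)
```

A real table would register one character per supported base (b2, b8, b32, b58, b64, ...), each chosen to be valid in its own alphabet, which is exactly the tricky part jbenet describes above.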
<computerfreak> so what do you guys think: upgrade to version 4 right now or not? (im using ipfs.io/ipns to share my demo website, everything else local gateway) Hard to decide :D
<richardlitt> How do I add an option to a curl request?
<dignifiedquire> richardlitt: don’t use curl, make your life easier: https://www.getpostman.com/
<richardlitt> specifically, trying to figure out how to document `ipfs bootstrap add --default`; is that technically a parameter?
<dignifiedquire> that would translate to bootstrap/add?default
<computerfreak> i think u can add ?arg=
<dignifiedquire> named options are converted to named parameters
<dignifiedquire> and additional arguments like the <peer> is translated to an arg array
<dignifiedquire> so `ipfs bootstrap add --default <peer>` translates to `/bootstrap/add?arg=<peer>&default`
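dignifiedquire's mapping rule is mechanical enough to sketch. The `/api/v0` mount point is an assumption about where the go-ipfs HTTP API lives, and this helper is hypothetical, not part of any real client library:

```python
from urllib.parse import quote

def api_url(command, args=(), flags=()):
    """Translate a CLI invocation into a go-ipfs-style HTTP API URL:
    positional arguments become repeated arg= query parameters and
    named options become bare query keys, per the mapping described
    above. Hypothetical helper for illustration only."""
    path = "/api/v0/" + "/".join(command)
    query = "&".join(["arg=" + quote(a, safe="") for a in args] + list(flags))
    return path + ("?" + query if query else "")

# `ipfs bootstrap add --default QmFoo` (QmFoo is a made-up peer id)
print(api_url(["bootstrap", "add"], args=["QmFoo"], flags=["default"]))
```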
Saxonian has joined #ipfs
ulrichard has quit [Read error: Connection reset by peer]
<richardlitt> dignifiedquire: cool.
<richardlitt> dignifiedquire: got sidetracked looking at getpostman.
<dignifiedquire> :P
Matoro has joined #ipfs
voxelot has joined #ipfs
voxelot has quit [Changing host]
voxelot has joined #ipfs
* daviddias checks what getpostman is
<daviddias> nice
<daviddias> for a moment I thought it would be a distributed PO box that would move my mail automatically closer to me without me having to think about it
<jbenet> no thats gmail
<jbenet> :)
<richardlitt> I want to check a 200 response for `ipfs bootstrap add <peer>`. Should I add a random peer from `ipfs swarm peers`, and then remove them? What do I do in cases where I want to check a 200 but don't actually want to send the command?
<jbenet> dignifiedquire: daviddias: want to add a hangout today for "api" ? i.e. to define the api?
corvinux has joined #ipfs
<daviddias> jbenet: sounds good
<dignifiedquire> jbenet: sure
<daviddias> let's update the meetup list to fit our needs https://github.com/ipfs/pm/issues/67
<richardlitt> jbenet: after the others? I'll add it to the list, set up an etherpad
<daviddias> s/meetup/sprint
<jbenet> i changed it
<jbenet> i removed all the others
<daviddias> testing + protocol + bitswap + data structures won't happen today, right?
<jbenet> (the ones we dont do generally)
<dignifiedquire> daviddias: do we have anything on the agenda for apps today? the only thing I have to talk about is ipfs/distributions and I’m not sure where to put that
<richardlitt> Cool. daviddias: are you set for your two discussions?
<daviddias> I'll remove them from the calendar
<daviddias> dignifiedquire: what is the state of `station?
<dignifiedquire> daviddias: no change since last week
<dignifiedquire> I didn’t have time to work on it, and besides me nobody touches it atm
<jbenet> thanks richardlitt! sorry for all the changes
<richardlitt> daviddias: as in, are those occurring today, and have you // will you set up an agenda for them
<computerfreak> dignifiedquire: i wanted to try it but failed to install it tbh :P
<daviddias> dignifiedquire: let's spend a bit to curate the list of issues
<richardlitt> jbenet: no worries. Was just about to ask you if all of them are happening today
<daviddias> and enable more people to contribute
<daviddias> I know there are more people excited to have station ready for them to use
<dignifiedquire> computerfreak: :/
<dignifiedquire> daviddias: yeah I can imagine :(
<jbenet> woah im leading apps on ipfs? i never knew :] ok
<computerfreak> how is station different from webui?
<richardlitt> !tell whyrusleeping lgierth that they should set up their agendas for their discussions today, and let me know // remove them from the sprint if they're not happening https://github.com/ipfs/pm/issues/67
<dignifiedquire> jbenet: actually I am ;)
<jbenet> oh ok good :)
<jbenet> hahaha
<dignifiedquire> but you are the placeholder so I can’t take any blame :P
<jbenet> sgtm
<dignifiedquire> daviddias: okay lets talk about how to get people contributing to station then ipfs apps today
<richardlitt> I'm going to remove "Permanent Link" now from the sprint table
tidux has joined #ipfs
<tidux> is something going on with ipns recently?
<richardlitt> sound good?
<tidux> I don't seem to be able to resolve anything from any of my systems
<jbenet> tidux: and your nodes are up?
<computerfreak> tidux: i use it already to show my clients Demo-Websites :D
<richardlitt> Should I ask my question about getting a 200 during the api call today? Never got a response
<tidux> > ipfs swarm peers | wc -l
<tidux> 152
<tidux> yeah it's up
<jbenet> (sorry tidux, ipns is the most alpha part still. its much better than it used to be, but its far from perfect, please report problems, etc)
<jbenet> hmmm and can you resolve it locally?
<jbenet> what version-- btw?
<tidux> this isn't local content
<tidux> this is me trying to find other stuff
<tidux> that I've found before
<daviddias> tidux: what are you trying to resolve?
<jbenet> richardlitt: what 200?
<tidux> /ipns/QmUqBf56JeGUvuf2SiJNJahAqaVhFSHS6r9gYk5FbS4TAn among other things
<richardlitt> jbenet: I want to check a 200 response for `ipfs bootstrap add <peer>`. Should I add a random peer from `ipfs swarm peers`, and then remove them? What do I do in cases where I want to check a 200 but don't actually want to send the command?
<jbenet> just add any valid multihash richardlitt
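A quick sketch of what "any valid multihash" means here (an illustration, not go-ipfs's actual validation logic): a `Qm…` hash is the base58btc encoding of a sha2-256 multihash, i.e. the bytes `0x12` (sha2-256 code), `0x20` (32-byte digest length), followed by the digest itself.

```python
# Minimal base58btc decoder + multihash shape check (a sketch only;
# go-ipfs does stricter validation than this).
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58decode(s):
    n = 0
    for ch in s:
        n = n * 58 + ALPHABET.index(ch)  # raises ValueError on bad chars
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    pad = len(s) - len(s.lstrip("1"))    # leading '1's encode zero bytes
    return b"\x00" * pad + raw

def is_valid_multihash(s):
    try:
        raw = b58decode(s)
    except ValueError:
        return False
    # <code byte> <length byte> <digest>: total length must match
    return len(raw) >= 2 and len(raw) == 2 + raw[1]

print(is_valid_multihash("QmUqBf56JeGUvuf2SiJNJahAqaVhFSHS6r9gYk5FbS4TAn"))  # → True
```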
<achin> i can't seem to resolve it either, tidux
<jbenet> "What do I do in cases where I want to check a 200 but don't actually want to send the command?" --- how could this work?
<jbenet> i cant resolve it either-- hmmm voxelot did you cache it?
<richardlitt> jbenet: No idea. Assumed someone might know some weird trick.
<tidux> yeah I'm trying at both my laptop's localhost:8080 gateway and my public gateway at https://ipfs.borg.moe/
<jbenet> or resolve it just now?
<jbenet> must be a dht forming thing
<voxelot> no cache, just now
<jbenet> we really need to be able to visualize these dht queries
<tidux> yeah I get a path resolve error
<jbenet> tidux: i suspect some churn occluded the record
<achin> whyrusleeping was working on something like that, right?
<jbenet> (on the dht)
<jbenet> (this is a dht problem)
<tidux> I'm still on 0.3.9
<tidux> is there a newer binary?
<jbenet> not yet
micxjo has joined #ipfs
<jbenet> tidux: can you try republishing?
<tidux> it's not my content
<jbenet> (sorry for the inconvenience)
<jbenet> ah
<achin> note that i don't see QmUqBf56JeGUvuf2SiJNJahAqaVhFSHS6r9gYk5FbS4TAn in the network
<tidux> I am, however, the one who published the Tenchi Muyo linked from that page
<tidux> which works fine
<jbenet> i dont see QmUqBf56JeGUvuf2SiJNJahAqaVhFSHS6r9gYk5FbS4TAn either
<achin> so maybe it's expected that it can't be resolved?
<achin> the node recently dropped off the net?
<jbenet> yeah it may not be republishing
<jbenet> tidux: i cant ping QmUqBf56JeGUvuf2SiJNJahAqaVhFSHS6r9gYk5FbS4TAn
<jbenet> (ipfs ping QmUqBf56JeGUvuf2SiJNJahAqaVhFSHS6r9gYk5FbS4TAn)
<jbenet> and probably a few records still out there, which voxelot found-- but the dht churn occluded them
<voxelot> maybe someone resolved that on my gateway at some point
<voxelot> but not getting it locally either
<jbenet> will need to add the "load handling" mechanic from coral so that in this case voxelot would've sent the record to others which it expected would have it too
<jbenet> voxelot: yeah that's possible
randomguy has joined #ipfs
simonv3 has joined #ipfs
strongest_cup has quit [Ping timeout: 260 seconds]
<jbenet> that's the root you're looking for. poor proxy for having ipns working, but hopefully it's useful
<tidux> ok I can resolve the content
<tidux> seems like it's an ipns problem
<tidux> yeah ipns seems completely broken for my node
<tidux> is there some particular files I can blow away to clear the cache for that?
<tidux> *are there
<achin> can you resolve /ipns/QmbuG3dYjX5KjfAMaFQEPrRmTRkJupNUGRn1DXCgKK5ogD ?
<tidux> yes
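For context on the republishing jbenet asked about: IPNS records expire and have to be re-signed by the node that owns the name, so a name goes stale when its publisher drops off the network. A sketch (assumes a running daemon; the content hash is a placeholder):

```shell
# On the node that owns the IPNS name — re-run periodically while online,
# since the signed record expires (hash below is illustrative):
ipfs name publish QmSomeContentHashGoesHere

# From any node, resolve the name back to the current content hash:
ipfs name resolve QmUqBf56JeGUvuf2SiJNJahAqaVhFSHS6r9gYk5FbS4TAn
```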
* dignifiedquire opens dr. pepper and takes a large sip
<achin> ok, that's good at least
<daviddias> wait, Dr pepper in Europe?
<daviddias> that is new to me
* whyrusleeping grumbles about the rain
* whyrusleeping grumbles about the lack of coffee
<achin> best to go back to bed
* whyrusleeping grumbles about the sprint meetings being at 9
<tidux> whyrusleeping: stickers showed up
<tidux> thanks dude :P
<whyrusleeping> tidux: wooo!
border0464 has quit [Ping timeout: 245 seconds]
border0464 has joined #ipfs
* dignifiedquire raises his dr. pepper to whyrusleeping
<richardlitt> dignifiedquire: Can you check this? https://github.com/Dignifiedquire/ipfs-http-api/pull/1
<richardlitt> morning whyrusleeping
<dignifiedquire> richardlitt: looking
<M-mubot> Good day, @freenode_richardlitt:matrix.org
<richardlitt> whyrusleeping: Are you set for your discussions this morning? Are both set to go?
<richardlitt> *just the one, nvm
<ion> daviddias: Dr. Pepper was really difficult to find over ten years ago but it has been pretty available in Finland now.
<whyrusleeping> richardlitt: i dont know if i'm awake enough for words longer than ten letters
<richardlitt> :D
<richardlitt> whyrusleeping: basically, this is just a reminder to have an agenda for the go-ipfs disc. nothing else. :)
<whyrusleeping> thank you for using a smaller word
<ion> “whyrusleeping”
<achin> that's longer than 10 letters!
Matoro has quit [Ping timeout: 256 seconds]
<dignifiedquire> richardlitt: you have mail ;)
<richardlitt> :D
<dignifiedquire> ion: yeah similar in Germany, though I only started drinking it this year
<lgierth> ohai folks
<whyru> there
<whyrus> better
* dignifiedquire finishes 0.5 l of dr. pepper and is ready for all the syncs
<dignifiedquire> whyrus: oh god, now your nick reminds me of walrus even more
<whyrus> :D
<lgierth> i'll be there in 5, feel free to start
<daviddias> ahahaha
Guest34213 has quit [Ping timeout: 252 seconds]
<daviddias> did whyrusleeping and `the walrus` fused ?
<haadcode> hey, are you guys streaming the discussion again (like last week)?
<haadcode> would love to listen in
<haadcode> *discussions
<jbenet> haadcode: we are -- and feel free to just join us if you've time!
<haadcode> jbenet: can't participate, still at work, but can listen
<jbenet> ah sounds good
cemerick has quit [Ping timeout: 260 seconds]
<dignifiedquire> haadcode: though you still have some time, discusions are only starting in 1.5 hours
<haadcode> oh ok
<haadcode> cool
<dignifiedquire> now is only irc sync
<richardlitt> Morning all
<M-mubot> Good morning to you too, @freenode_richardlitt:matrix.org
<richardlitt> Time for the thing?
<kyledrake> Good too early morning and/or too late evening
<tidux> voxelot: I got a weird error with your API gateway
<dignifiedquire> time for everything
robcat has left #ipfs [#ipfs]
<achin> you all could bring back swatch internet time :D
<dignifiedquire> kyledrake: thanks you too
<tidux> "Message": "routing: not found"
<richardlitt> Cool.
<tidux> trying to resolve /ipns/Qmeg1Hqu2Dxf35TxDg18b7StQTMwjCqhWigm8ANgm8wA3p
<richardlitt> Everyone drop your stuff into the sprint
<achin> jbenet: really useful!
speewave has quit [Quit: Going offline, see ya! (www.adiirc.com)]
<whyrus> write my update in that issue?
<richardlitt> Yep
<richardlitt> All updates for this last week's sprint please send to: https://github.com/ipfs/pm/issues/60
Matoro has joined #ipfs
<richardlitt> Are we ready to begin?
devbug has joined #ipfs
* dignifiedquire ready
<richardlitt> dignifiedquire: you're up first. Feel free to go ahead
<dignifiedquire> incoming
<dignifiedquire> ### Sprint Update for @dignifiedquire
<dignifiedquire> - [~] Work on ipfs/distributions
<dignifiedquire> - [~] Implementation https://github.com/Dignifiedquire/distributions/tree/fresh
<dignifiedquire> - [~] Windows CI for go-ipfs (https://github.com/ipfs/go-ipfs/pull/2042)
<dignifiedquire> EOF
* daviddias ready
<richardlitt> thanks dignifiedquire
<dignifiedquire> not so much this week :( but at least got a start on the distributions page including some involved discussions with jbenet and whyrus
Tv` has joined #ipfs
<daviddias> ion: and I already had a great opinion about Finland :)
<dignifiedquire> and windows ci tests are nearly being executed
<achin> how much dr.pepper can you buy with 800 euros?
<tidux> about six diabetes
<richardlitt> lgierth whyrus please add your updates
<dignifiedquire> tidux: I’d say more like seven diabetes
<daviddias> dignifiedquire: nice work on designing the distributions page (still looking for that star exploding orange :P )
devbug has quit [Ping timeout: 256 seconds]
<daviddias> dignifiedquire: any love to the API's this week? Anything blocking you on that end?
<dignifiedquire> daviddias: I’m single threaded, that’s what’s blocking me ;)
<whyrus> adding
<randomguy> so there are comin livestreams today? on which google account is the hangout?
* lgierth back
<richardlitt> daviddias: I should have been on that, too. Just figured out how to do it, should have some more coming this week
<daviddias> nice push on getting CI for Windows also :)
<daviddias> richardlitt: got it! Sounds good :)
<dignifiedquire> randomguy: streams incoming in 1h 15min on https://plus.google.com/u/2/b/108638684245894749879/108638684245894749879
<dignifiedquire> randomguy: also links will be posted in irc shortly before starting
<richardlitt> cool. Any more questions for dignifiedquire?
<richardlitt> guess not. daviddias, want to go?
<dignifiedquire> one question before we continue
<dignifiedquire> need some discussion around ipfs/distributions today, which hangout should we put this into?
<dignifiedquire> I would be okay with adding it to apps, as the agenda is just a single item atm
<jbenet> dignifiedquire: awesome stuff on appveyor
<jbenet> really useful
<dignifiedquire> jbenet: thanks, been hacking with node long enough on windows to know some pain points in those transitions
cemerick has joined #ipfs
martink__ has joined #ipfs
<dignifiedquire> ah also I forgot one thing I did this week
<dignifiedquire> - [x] German translation of the webui :)
martink__ has quit [Max SendQ exceeded]
<jbenet> dignifiedquire: getting windows CI in will be a big deal for our windows users.
<richardlitt> dignifiedquire: Shouldn't "home" be "heim" in the Deutsch translation? :)
<jbenet> dignifiedquire: what's left on that btw?
<lgierth> richardlitt: haha :) or zuhause
martink__ has joined #ipfs
<dignifiedquire> richardlitt: not really, as it would confuse people
<dignifiedquire> in the internet everyone knows “home"
<richardlitt> wort
martinkl_ has quit [Ping timeout: 246 seconds]
Matoro has quit [Ping timeout: 256 seconds]
<jbenet> dignifiedquire: what's left for windows / appveyor support?
Qwertie has quit [Ping timeout: 245 seconds]
<dignifiedquire> jbenet: tests are started, but immediately failing because of https://ci.appveyor.com/project/ipfsbot/test/build/66#L93 (Cannot find the tests’ local ipfs tool.) which I have no idea what that means
jamie_k_ has joined #ipfs
<jbenet> ah i'll post in the appveyor thread (sharness scripts cannot find ipfs it seems)
<jbenet> ok -- im done asking things
<richardlitt> cool.
<richardlitt> I think daviddias, you're ready to go!
* daviddias incoming
<daviddias> This last week was filled by registry-mirror, Node.js Interactive and delivering classes (which took my full Tuesday and Thursday)
<daviddias> - npm on IPFS
<daviddias> - [x] Got demo working + video (with extra router, antenas and everything, it is pretty sweet I think)
<daviddias> - [x] Add npm to IPFS stravaganza. Much <3 to @whyrusleeping @lgierth and @jbenet for being super present and offloading some of their time to help me get things going faster
<daviddias> - [~] Adding complete npm on Pollux
<daviddias> - [x] ~70000 modules on Castor
<daviddias> - [~] Adding complete npm in @whyrusleeping machine with ipfs --chunker=rabin
<daviddias> - node interactive
<daviddias> - [x] preparation of talk
<daviddias> - [x] flights + logistics
<daviddias> - extra
<daviddias> - [x] Delivering Computation for the Web classes
<daviddias> - [x] &yet call with @jbenet
<daviddias> - [x] some CR
Qwertie has joined #ipfs
<lgierth> word, the demo *is* pretty sweet
<whyrus> reminds me of 5am demo
<dignifiedquire> daviddias: where can I see that demo
<dignifiedquire> daviddias: also any word on uploading your local clone of npm to Castor?
<jbenet> whyrus hahaha 5am demo best demo
<daviddias> dignifiedquire: I'll send it to you (it is going to be public tomorrow)
<lgierth> :):)
<whyrus> :D
<dignifiedquire> daviddias: thanks
<dignifiedquire> 5am demo??
<jbenet> daviddias: great work on all the npm things
<jbenet> daviddias: is the npm add complete yet? what's missing?
<jbenet> daviddias: you have more space in pollux now
<daviddias> dignifiedquire: Castor is slow for what we wanted, so we are making Pollux be the primary node to be responsible to clone npm
devbug has joined #ipfs
<dignifiedquire> daviddias: I see
<jbenet> dignifiedquire: a demo whyrusleeping and i recorded a long time ago, very tired. the recording is ridiculous, but it's a good demo. we'll re-record it some day
<dignifiedquire> jbenet: :D
<dignifiedquire> the next demo should contain whyrus making coffee ;)
<dignifiedquire> daviddias: we forgot to mention “First episode of Coffee talks with Jeromy”
danlyke has joined #ipfs
<jbenet> daviddias: is the npm add complete yet? what's missing?
<daviddias> jbenet: it is not done 'yet', needs a bit more time
<jbenet> daviddias: estimate?
<jbenet> daviddias: is it going to finish?
amade has quit [Quit: leaving]
<daviddias> It had hit disk full somewhere during the night
<jbenet> daviddias: did you resume it?
<daviddias> yes, yes :)
<daviddias> and it is rechecking everything really fast
<daviddias> cause all it has to do is reads now
<daviddias> munched 25 gigs very fast
<jbenet> daviddias: nice-- estimate for finish? what's it at now?
<whyrus> have you tried running a gc during the add?
<daviddias> like in ~1:30 hours
<daviddias> whyrus: we used it, it works really well :)
<whyrus> yussss
<jbenet> nice!
<jbenet> whyrus: could you add some progress reporting to tar, so that daviddias does not go insane when he tries to add it?
<richardlitt> speaking of whyrus and coffee: now that we have fewer chats, we should really push this whole sync time up an hour or two to let him sleep
<jbenet> (and me)
<whyrus> mmm, okay
<daviddias> it should go at a fast pace till 150GB (out of ~430GB), then it will go back to same speed of adding when it has to hash + write
<dignifiedquire> richardlitt: that would mean I can’t participate in the later calls :(
<richardlitt> dignifiedquire: Ah. Nvm then.
<jbenet> ok thanks daviddias. am done asking questions
<daviddias> I'll keep everyone up to date as I know more and have estimates
<daviddias> right now, the IPFS process is on that phase where it doesn't fully know
<richardlitt> anyone else with questions for daviddias ?
<whyrus> daviddias: what is the square root of pi?
wiedi has quit [Quit: ^Z]
<ion> daviddias: What is the answer to the ultimate question about life, the universe and everything?
<jbenet> we're trying to get through this quickly.
<richardlitt> alright
<richardlitt> me next
Pharyngeal has quit [Ping timeout: 272 seconds]
<richardlitt> ## This Week
<richardlitt> - [x] Make ipfs/community/pull/65 work again
<richardlitt> - [~] Spec out http api `bootstrap` Dignifiedquire/ipfs-http-api/pull/1
<richardlitt> - [ ] Spec out http api `cat` ipfs/api/issues/7
<richardlitt> - [ ] try out diasdavid/registry-mirror/issues/10, and ipfs/specs/labels/libp2p
<richardlitt> - [ ] Make sure that each IPFS repository has a README.
<richardlitt> - [ ] Find the old Copyright stuff ipfs/community/issues/79
<richardlitt> - [~] Sprint Management
<richardlitt> ## Extra
<richardlitt> - [x] Added IRC to ipfs/pm list of people ipfs/pm/pull/64
<richardlitt> - [x] Proposed shift in sprint todos list ipfs/pm/issues/65
<richardlitt> - [x] Edited sprint master's role ipfs/pm/pull/66
<richardlitt> EOF
<richardlitt> Some community and pr management this week, but didn't get to do all the things I had hoped to do. Better placed as of this morning to do more work on the API - thanks dig.
<daviddias> whyrus ion the answer would be -> Don't use an Intel Atom to figure that out :p
<jbenet> ok thanks richardlitt
<richardlitt> I don't expect a lot of questions for me... so... kyledrake ?
<dignifiedquire> richardlitt: excited for you to get into api docs :)
<jbenet> he may not be here
<richardlitt> cool. I can post his update?
<kyledrake> I'm here
<kyledrake> - [X] work on libp2p-website improvements
<kyledrake> - [~] data persistence guide for IPFS site
<kyledrake> - [X] preparations for BW talk and panel
<kyledrake> - [X] S3 integration research and testing
<daviddias> richardlitt: dignifiedquire me too :)
<richardlitt> :D
<kyledrake> So I -prepared- for BW, but my passport is gone. So I'm in the states for the foreseeable future.
<pjz> ipfs, doing approximately nothing for the weekend, now has a RSS of 3.3GB
<kyledrake> daviddias I'll show you the libp2p stuff this week
<daviddias> kyledrake: isn't there an option in the US to make a new one with 'urgency' mode on, like next day ready?
<kyledrake> daviddias not if you also lost your birth certificate
<lgierth> what's bw?
<daviddias> kyledrake: sweet! If you feel comfortable to work in a open PR, I'm more capable of providing constant feedback
<mappum> I've done that before, didn't need a birth certificate. Think I might have just used my driver's license
<mappum> Or it might have been an expired passport
<kyledrake> mappum you had an expired passport likely
grahamperrin has joined #ipfs
hjoest has quit [Ping timeout: 246 seconds]
<jbenet> lgierth: blockchain workshop -- a conference we were going to present at. (i was at the last one)
<lgierth> ah cool
<jbenet> ok -- next maybe?
<kyledrake> I'm good
<richardlitt> cool
<richardlitt> thanks kyledrake
<richardlitt> whyrus: ?
<whyrus> sure
<whyrus> - [ ] libp2p vendor pathing fixes
<whyrus> - [x] fix pathing in gx itself first
<whyrus> - [ ] ipfs-update tests and feedback
<whyrus> - [ ] osx test failures
<whyrus> - [ ] dist.ipfs.io work
<whyrus> - [x] ipfs files perf for @diasdavid
<whyrus> - [x] fix go-peerstream deadlock (jbenet/go-peerstream#21)
<whyrus> got completely sidetracked and ended up just making ipfs add really fast:
<whyrus> - [x] ipfs add superfast
Pharyngeal has joined #ipfs
<richardlitt> hahaha
<lgierth> super super fast
<lgierth> <3
<whyrus> super dooper
hjoest has joined #ipfs
<whyrus> this week might be a bit slow for me, hectic end of the year type stuff
<jbenet> whyrus: fantastic work with add-- great improvements for people
<jbenet> whyrus: theres a bit more CR for you on the ipfs-update PR, but otherwise LGTM.
ashark has quit [Read error: Connection reset by peer]
<whyrus> mmmkay, i'll get to that today
<jbenet> whryus: let's focus on shipping 0.3.10. if we need to add a 0.3.11 for straggler things before 0.4.0 so be it. but we should ship 0.3.10 sooner
<whyrus> we can do 0.3.10 right now
<whyrus> everything i need is in master
<daviddias> whyrus: thank you for all of the things! huge ^5! :D
<richardlitt> jbenet: Can we add that as a sprint goal?
<jbenet> richardlitt: yep!
<jbenet> whyrus: is utp in master?
<jbenet> whyrus: i thought utp wasnt in master yet
<whyrus> utp isnt in master yet
<whyrus> i was thinking it was going into 0.4.0
<jbenet> whyrus we had planned to put it in 0.3.10. we can push it to 0.3.11 if we need
<lgierth> let's talk about 0.3.10 later
<jbenet> lgierth: sgtm
* lgierth hungry
<jbenet> ok next then?
<richardlitt> thats yu
<richardlitt> *you, jbenet
<jbenet> ok incoming.
<jbenet> - [x] stellar talk with dm
<jbenet> - [x] call with &yet re apps/post
<jbenet> - [x] discussion with openbazaar
<jbenet> - [x] various org things
<jbenet> - [x] coffee talks: bitswap
<jbenet> - [x] prog-fs demo https://github.com/jbenet/prog-fs
<jbenet> - [x] go-reuseport bugfix https://github.com/jbenet/go-reuseport/pull/11
<jbenet> - [x] Space issue: discuss, move stuff for @daviddias, monitor
<jbenet> - [x] Code Review + Merges:
grahamperrin has left #ipfs ["Leaving"]
<lgierth> yay merges!
ashark has joined #ipfs
<jbenet> (i decided im going to start listing all the CR+Ms i do because those are huge timesinks)
<lgierth> also mucho thanks for helping out with storage
<jbenet> lgierth: np! you too
<lgierth> yeah that's helpful
<richardlitt> jbenet: I think the CR+M idea is a great idea
<jbenet> ok am done
<richardlitt> Cool. Any other comments for jbenet ?
<daviddias> <3 for help with storage stuff
* lgierth next?
<richardlitt> cool. Guess not.
<richardlitt> yep
<richardlitt> go ahead
<lgierth> Lots of interruptions this sprint -- NPM and storage in general are in good shape now,
<lgierth> discovery-cjdns is working too (lacks tests).
<lgierth> - infrastructure
<lgierth> - [x] castor support
<lgierth> - [x] castor/npm firefighting
<lgierth> - [ ] ipv6 for gateway and swarm ipfs/infrastructure#88
<lgierth> - [x] order yubikeys ipfs/infrastructure#122
<lgierth> - [x] order 2 more hetzner storage hosts
<lgierth> - [x] ~~automatic OVH renewal~~
<lgierth> - cjdns
<lgierth> - [~] discover peers from cjdns (todo: tests) ipfs/go-ipfs#1316
<lgierth> - [~] invite to fc00 development (aka go-cjdns) https://github.com/fc00/spec
<lgierth> - go-ipfs
<lgierth> - [~] investigate red builds ipfs/go-ipfs#2001
<lgierth> - [ ] gateway prefix hardening https://github.com/ipfs/go-ipfs/pull/1988/files#r46498115
<lgierth> - [ ] test fs-repo-migrations
<lgierth> - [ ] files api response status codes
<lgierth> - [~] RFM assets: fix gc example ipfs/go-ipfs#2029
<lgierth> - hope
<lgierth> - writable gateway (blocked) ipfs/infrastructure#105
<lgierth> - build a prometheus exporter using :5001
<lgierth> it looks like we won't be getting these two new hosts today :/ got an order confirmation, that was it
s_kunk has quit [Ping timeout: 240 seconds]
<daviddias> lgierth: thank you for dealing with so many infrastructure things
<ipfsbot> [go-ipfs] whyrusleeping created exp/add-tar-mix (+1 new commit): http://git.io/vRiP9
<ipfsbot> go-ipfs/exp/add-tar-mix 565f526 Jeromy: experiment to check disk savings of inline tar expansion...
<daviddias> so no new hetzner hosts for today?
<daviddias> how much time do they typically take? Is there a holiday over there or something?
amingoia has joined #ipfs
<lgierth> yeah i think so :/ they're saying they activate new hosts between 9am and 6pm, and it's 7pm now
<lgierth> just a normal working day. i forgot to order yesterday though and ordered today around 2pm
<whyrus> daviddias: try the exp/add-tar-mix branch out on my machine to see how much space savings we get from expanding tar files during add
<jbenet> daviddias: they prob wouldnt help anyway, you'd need to move stuff to them.
RJ2 has quit []
wiedi has joined #ipfs
e-lima has joined #ipfs
<jbenet> thanks for handling all the hosts infra madness this week lgierth
RJ2 has joined #ipfs
<jbenet> and +1 for ipfs/go-ipfs#2001
<wao_> cjdns get connected with ipfs? woah
<lgierth> oh what i posted is slightly out of date x) the assets PR is merged, and i consider #2001 largely done from my part
<daviddias> jbenet: if I understood well, these new machines would be super close to pollux with high bandwidth link between them, moving the registry would have been fast
<richardlitt> cool
<lgierth> daviddias: yeah probably same datacenter
<jbenet> daviddias: ah. hmm lgierth: think they do rush order? if they were to be ready they might make a big material difference to daviddias tonight
<lgierth> i'll try
<daviddias> whyrus: I need one more machine. Yours is busy with rabin (and not really fast), Pollux is busy with normal add, Castor would never finish in time
<whyrus> hrm...
<whyrus> rabin is slowww
<daviddias> I mean, I need one more machine to use your new ipfs tar
<lgierth> i'll get you one, one way or the other
<daviddias> thank you! What is the plan?
<lgierth> not sure yet -- any more sync updates?
<jbenet> no we're good-- go eat
<lgierth> cool i'll be back in ~45 and then we'll get david some storage
<jbenet> thanks for driving the sprint richardlitt
<lgierth> yeah thanks! :)
<richardlitt> No problem
<richardlitt> Thanks all! Please submit your to dos to the new sprint _by the end of the day_.
<richardlitt> You all know where to find the new sprint. :)
<daviddias> thank you richardlitt :)
<jbenet> anybody else that wanted to talk about anything, feel free to do so now, and/or drop it into the sprint: https://github.com/ipfs/pm/issues/60
<achin> tossing this idea out there: at the end of every sprint, everyone should list zero or one things they completed that they'd like to call attention to. this thing doesn't have to be 100% user-facing (so infrastructure stuff still counts), but the idea is that these things could be used as a quick high-level "this week in IPFS"
<jbenet> achin: i like that idea. maybe one person could crawl the sprint for those things, encouraging people to write stronger update messages?
<daviddias> "This week in IPFS" kind of update
<daviddias> sgtm :)
<achin> jbenet: i tried myself, but i'm missing some context for a lot of these
<achin> it's hard for me to tell what's important enough to deserve a special mention
dignifiedquire_ has joined #ipfs
grahamperrin has joined #ipfs
grahamperrin has left #ipfs [#ipfs]
<dignifiedquire> achin: maybe everyone can add a star to those issues they think are relevant and you could crawl them then ⭐️
<achin> dignifiedquire: yeah, that's what i was thinking
<jbenet> dignifiedquire +1
<achin> just a super simple annotation (plus an optional few words, if they want) on the sprint updates. i'd gather, collate, and hyperlink them
Matoro has joined #ipfs
harlan_ has joined #ipfs
<whyrus> jbenet: so, 0.3.10
dreamdust has left #ipfs [#ipfs]
<whyrus> i mentioned (perhaps unclearly) that i was planning on pushing utp into 0.4.0 here: https://github.com/ipfs/go-ipfs/pull/1789
<dignifiedquire_> IPFS Apps Sync starting in 5min: Hangout: https://hangouts.google.com/call/bouda2johnxgm3f5xh46hp3beya Stream: http://youtu.be/1QsX7-t_eI4
srenatus has quit []
atrapado has joined #ipfs
srenatus_ has joined #ipfs
<jbenet> ahh ok
dignifiedquire_ has quit [Quit: dignifiedquire_]
<whyrus> green light to move forward shipping 0.3.10?
<dignifiedquire> whyrus: what about my headers :(
<whyrus> dignifiedquire: there was some discussion on that
<dignifiedquire> whyrus: talk about that in the api hang later
NeoTeo has quit [Quit: ZZZzzz…]
oguz_ has joined #ipfs
ogzy has quit [Disconnected by services]
oguz_ is now known as ogzy
ogzy has quit [Changing host]
ogzy has joined #ipfs
ogzy_ has joined #ipfs
ygrek_ has joined #ipfs
<ogzy> what does that mean
<ogzy> ipfs repo gc
<ogzy> removed QmcE3zv6MsTwHwWMNX8ZXoeQWY9vSgqkWzAd4nAMPAZF8Y
<ogzy> i closed one of my nodes, and this is the content there?
<ogzy> is there a timeout for serving this content at other nodes?
<achin> the "removed" message means that QmcE3zv6MsTwHwWMNX8ZXoeQWY9vSgqkWzAd4nAMPAZF8Y was downloaded by your node, but it wasn't ever pinned
<achin> so it was always temporary. non-pinned hashes get removed during a repo-gc
rombou has joined #ipfs
martink__ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<ogzy> achin, but i can still see the content of it although it is removed
<achin> how are you "seeing" the content? with "ipfs get" or "ipfs cat" ?
<achin> ok, that's a bit like "ipfs cat"
<achin> so in that case, your node would have re-downloaded the file again
<ogzy> achin, redownload from where?, the node that i added is down
<achin> there might be other nodes that have a copy
<ogzy> achin, so what is the use of pinning then? could it be that files under a certain size get downloaded automatically?
<achin> you pin stuff that you want to always make available to the network
<ogzy> achin, ok will try with some other content again
<ogzy> achin, i didn't pin the file i added, but it is somehow available
<achin> perhaps it was cached on one of the gateway nodes, if someone requested the hash via ipfs.io/ipfs/<hash>
<ogzy> achin, i am not using gateway, i have three nodes running daemon and i removed bootstrap server part
jamie_k_ has quit [Quit: jamie_k_]
reit has quit [Ping timeout: 245 seconds]
<achin> what does `ipfs dht findprovs QmcE3zv6MsTwHwWMNX8ZXoeQWY9vSgqkWzAd4nAMPAZF8Y` return on your node?
<tidux> unrecognized event type errors
<ogzy> achin, unrecognized event type: 6
<ogzy> QmbA15BdKZw7CuNqmXTYuDK4TzUH2cnGjX3jPyKjgRjQJ1
<ogzy> QmfGaKQwDPjjcigBfHm7584M4yzqyRn536yePBALRcJaEe
<ogzy> error: routing: not found
<whyrus> you can ignore the unrecognized type errors for now, i need to fix those
<achin> ok, those two hashes are peers that are providing the hash in question
<whyrus> but they don't mean anything of great consequence
<ogzy> achin, ok the first one is the node itself
<ogzy> achin, and the other one is the other node up
<ogzy> achin, so whenever the file is accessed from other node, is it downloaded anyway?
rombou has quit [Ping timeout: 250 seconds]
<achin> i'm not quite sure what node you are referring to by "other node"
<ogzy> achin, the nodes that are up and running ipfs daemon, i stopped the ipfs daemon that i added the content
ygrek_ has quit [Ping timeout: 240 seconds]
<achin> using the names NodeA and NodeB, can you say again what you did on what node? you added something on nodeA, then shutdown nodeA, and then tried "ipfs get" on nodeB?
<ogzy> achin, yes exactly
<ogzy> achin, before shutting down nodeA, i tried ipfs get from nodeB, then shut down nodeA, tried again and still accessible also from nodeC
<ogzy> achin, ipfs dht findprovs says it is at nodeB and nodeC
<achin> in that case nodeB would have had a copy
<achin> when did you do the GC?
<ogzy> achin, right after shutting down
<achin> and you ran the GC on what node?
<ogzy> achin, yes
* lgierth infrastructure hangout about to start
<achin> what node?
<ogzy> achin, nodeB
<ogzy> achin, also on nodeC
<ogzy> achin, now i shutdown nodeC, file is inaccessible
<jbenet> dignifiedquire: what's the hangout link for infra?
<dignifiedquire> jbenet: lgierth generating now
<jbenet> i know this is the view link: https://www.youtube.com/watch?v=cqyc0spd51k
<lgierth> ok lets keep this burger for after hangout
corvinux has quit [Read error: Connection reset by peer]
Encrypt has quit [Quit: Eating time!]
dignifiedquire_ has joined #ipfs
<lgierth> thnx
<jbenet> ogzy: ipfs add defaults with pin=true
<jbenet> ogzy ipfs add locally implies pin, it's the natural inclination for a lot of people so we made that choice there.
<jbenet> ogzy: ipfs object put defaults to pin=false.
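The behaviour ogzy ran into can be reproduced deliberately. A sketch against a running local daemon (hashes in the output are illustrative):

```shell
# 'ipfs add' pins by default, so the content survives GC:
HASH=$(echo "hello ipfs" | ipfs add -q)
ipfs pin ls --type=recursive | grep "$HASH"   # listed as pinned
ipfs repo gc                                  # does not remove $HASH

# Unpin it and GC again — now the block is collected:
ipfs pin rm "$HASH"
ipfs repo gc                                  # prints: removed <hash>

# Other nodes that fetched (and cached) it may still provide it:
ipfs dht findprovs "$HASH"
```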
<dignifiedquire> whyrus: can you please join the infra hangout in 5min or so, will talk about distributions
<whyrus> sounds good
<whyrus> link?
<whyrus> ah, nvm
Matoro has quit [Ping timeout: 272 seconds]
tidux has left #ipfs ["WeeChat 1.0.1"]
Matoro has joined #ipfs
ygrek_ has joined #ipfs
micxjo has quit [Remote host closed the connection]
<daviddias> Let's get that big AWS machine
<dignifiedquire> daviddias: let me quickly check which one would be good for you
<lgierth> back to the burger
<daviddias> dignifiedquire: do you have the link for the next hangout at hand too?
<dignifiedquire> daviddias: jbenet whyrus libp2p
<daviddias> thank you
<dignifiedquire> stream: http://youtu.be/Yefda0XXkLA
<jbenet> dignifiedquire daviddias lgierth i think we want i2.4xlarge or i2.8xlarge
<jbenet> 800GB SSD 16 vCPU
<jbenet> though not sure if that's enough disk
<dignifiedquire> not enough from what david says
<dignifiedquire> looking now
<daviddias> I can do it with 800GB if I keep doing repo gc
nomoremoney is now known as victorbjelkholm
<jbenet> correction: 4x800 SSD and 8x800 SSD
<dignifiedquire> yeah
<dignifiedquire> I think i2.4xlarge should be good
* kandinski is attempting to join the libp2p hangout
<dignifiedquire> daviddias: lgierth i2.4xlarge instance: 4x800GB ssd + 122 GB memory, that should make you happy
<kandinski> I don't know how google hangouts work, but do I need to be whitelisted or something?
<dignifiedquire> kandinski: you shouldn’t, what’s the error?
<dignifiedquire> join the youtube stream for now so you can at least hear: http://youtu.be/Yefda0XXkLA
<randomguy> need plugin
anderspree has quit [Ping timeout: 264 seconds]
hjoest has quit [Ping timeout: 246 seconds]
anderspree has joined #ipfs
<kandinski> thanks
<kandinski> dignifiedquire: thanks a lot. The link on the calendar is outdated.
hjoest has joined #ipfs
<randomguy> is there any chance to filter the sound stream to youtube for better quality?
kvda has quit [Ping timeout: 272 seconds]
border0464 has quit [Ping timeout: 250 seconds]
<dignifiedquire> randomguy: not sure, sound quality is just bad for some atm I think
<kandinski> graphing the dependency tree: that would be quite an interesting resource for re-implementations.
border0464 has joined #ipfs
Matoro has quit [Ping timeout: 256 seconds]
<whyrus> those of you who left the hangout, thank you for sparing our bandwidth!
<whyrus> <3
<whyrus> jbenet: do you have tmobile?
<jbenet> whyrus yes
<jbenet> whyrus better than philz wifi
<whyrus> they are giving everyone unlimited data until february
<computerfreak> no problem :)
<computerfreak> but sound is bad in general :/
<achin> only for some people, it seemed
<dignifiedquire> computerfreak: yeah jbenet and daviddias are on coffeeshop wifi
<kandinski> someone please fix the calendar links
gunn has quit [Ping timeout: 245 seconds]
<dignifiedquire> richardlitt: can you please remove all the links from the calendar, they are confusing people
<computerfreak> well ^^ we should consider infrastructure meeting for connection to internet ( or other users without ISP :P )
<kandinski> for the time being I'm only auditing these calls, but a link to the youtube sync on the calendar (I'm subscribed to it) would help
<richardlitt> dignifiedquire: doing it now
<dignifiedquire> kandinski: that’s kinda hard, cause those links are generated just minutes before it starts
<kandinski> I see. I didn't know.
<kandinski> I thought one could have an ongoing meetup ID of some kind.
<dignifiedquire> kandinski: you can follow https://plus.google.com/108638684245894749879 for the events
<kandinski> if only there were a permanent web that could do that...
<kandinski> dignifiedquire: cool, thanks.
<dignifiedquire> kandinski: or subscribe to this youtube channel: https://www.youtube.com/channel/UCdjsUXJ3QawK4O5L1kqqsew
<richardlitt> dignifiedquire: done
<dignifiedquire> richardlitt: thanks
<richardlitt> np
gunn has joined #ipfs
Matoro has joined #ipfs
fazo has joined #ipfs
martinkl_ has joined #ipfs
<randomguy> quick question: can i write qml elements and load it with the browser over ipfs somehow?
cemerick has quit [Ping timeout: 272 seconds]
patcon has joined #ipfs
<jbenet> thanks everyone
<dignifiedquire> Starting in 10 minutes
devbug has quit [Ping timeout: 250 seconds]
voxelot has quit [Ping timeout: 256 seconds]
cemerick has joined #ipfs
patcon has quit [Ping timeout: 256 seconds]
niles has joined #ipfs
evanmccarter has joined #ipfs
pfraze has quit [Read error: Connection reset by peer]
jatb has joined #ipfs
pfraze has joined #ipfs
s_kunk has joined #ipfs
rendar has quit [Ping timeout: 245 seconds]
NeoTeo has joined #ipfs
<richardlitt> api hangout?
<dignifiedquire> richardlitt: still on go-ipfs
<richardlitt> kk
voxelot has joined #ipfs
voxelot has quit [Changing host]
voxelot has joined #ipfs
rendar has joined #ipfs
Matoro has quit [Ping timeout: 256 seconds]
wiedi has quit [Quit: ^Z]
<jbenet> brb 2m
<dignifiedquire> richardlitt: ^^
fazo has quit [Changing host]
fazo has joined #ipfs
<ion> jbenet: 6.7 nanoseconds
* dignifiedquire sitting lonely in hangouts
cemerick has quit [Ping timeout: 272 seconds]
<dignifiedquire> whyrus: are you joining us?
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
hjoest has quit [Ping timeout: 246 seconds]
hjoest has joined #ipfs
Matoro has joined #ipfs
arpu has quit [Ping timeout: 260 seconds]
wiedi has joined #ipfs
hjoest2 has joined #ipfs
hjoest has quit [Ping timeout: 246 seconds]
slothbag has joined #ipfs
<computerfreak> someone may wanna tell me what i need to import into my qml file to get the `var ipfs` require working? https://github.com/ipfs/js-ipfs-api/blob/master/examples/add.js
<pokeball99> jbenet: what exactly is that?
<computerfreak> everything works, except the ''require'' thing
martinkl_ has joined #ipfs
hjoest2 has quit [Read error: Connection reset by peer]
Encrypt has joined #ipfs
grahamperrin has joined #ipfs
harlan_ has quit [Quit: Connection closed for inactivity]
hjoest has joined #ipfs
M-fil has quit [Quit: node-irc says goodbye]
M-fil has joined #ipfs
ygrek_ has quit [Ping timeout: 272 seconds]
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
<randomguy> where is the path to ipfs executable on linux?
<randomguy> /usr/bin?
jamie_k has joined #ipfs
chriscool has joined #ipfs
<daviddias> jbenet: lgierth about that really nice AWS instance
<daviddias> how can I tap into those credits/account/machine created?
<lgierth> WHO HAS SUMMONED ME
<lgierth> daviddias: there are aws creds in meldium
<achin> randomguy: depends on how you install it
<jbenet> daviddias: get one of:
<jbenet> i2.2xlarge — 8 vCPU, 27 ECU, 61 GiB RAM, 2 x 800 GB SSD — $1.705 per Hour
<jbenet> i2.4xlarge — 16 vCPU, 53 ECU, 122 GiB RAM, 4 x 800 GB SSD — $3.41 per Hour
<jbenet> i2.8xlarge — 32 vCPU, 104 ECU, 244 GiB RAM, 8 x 800 GB SSD — $6.82 per Hour
<lgierth> i think the billed amount is taken from the credits
<lgierth> that's what digitalocean does
<daviddias> nice! :D
grahamperrin has quit [Remote host closed the connection]
<lgierth> randomguy: it's in $GOBIN usually
<lgierth> and $GOBIN is usually $GOPATH/bin
<lgierth> and $GOPATH is usually /usr/local/go
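lgierth's answer can be checked with a short snippet (a sketch; it assumes a conventional Go setup where `$GOBIN` defaults to `$GOPATH/bin`, and uses `$HOME/go` as a fallback when `$GOPATH` is unset — your paths may well differ):

```shell
#!/bin/sh
# Where a go-installed ipfs binary usually ends up: $GOBIN if set,
# otherwise $GOPATH/bin (with $HOME/go as a common fallback for GOPATH).
bin_dir="${GOBIN:-${GOPATH:-$HOME/go}/bin}"
echo "expecting ipfs at: $bin_dir/ipfs"
# `command -v` reports where the shell actually resolves `ipfs` from PATH
command -v ipfs || echo "ipfs is not on PATH; try adding $bin_dir"
```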
<chriscool> Hi everyone!
<achin> ohi
<noffle> jbenet: agreed
_obi_ has joined #ipfs
<jbenet> chriscool hello o/
<M-mubot> Hey @freenode_jbenet:matrix.org, Hello!
<jbenet> chriscool i think dignifiedquire had questions about sharness + windows
<chriscool> \o How are you jbenet?
<dignifiedquire> daviddias: richardlitt jbenet https://github.com/ipfs/api/issues/10 and https://github.com/ipfs/api/pull/9
grahamperrin has joined #ipfs
<jbenet> chriscool doing well
<jbenet> chriscool yourself?
simonv3 has quit [Quit: Connection closed for inactivity]
<chriscool> dignifiedquire ok to discuss sharness + windows though I don't use Windows much
<chriscool> jbenet I am fine
<dignifiedquire> chriscool: not much to discuss tbh, just that it’s a bit tricky to get all running, but I got somewhere now that I’m running everything inside cygwin (env variables are the biggest problem, though I’m not sure that’s an issue with sharness in general or our makefiles)
<dignifiedquire> chriscool: only thing is if you have any specific tips, or gotchas that you are aware of
<_obi_> hi all; could someone point me to plans/info pertaining to notifications and identities? I'm getting lost in all the repos on github
speewave has joined #ipfs
<chriscool> dignifiedquire unfortunately I never tried to run sharness tests on Windows
<chriscool> did you try to run them first in a git prompt?
patcon has joined #ipfs
<dignifiedquire> jbenet: not finding the place where the ipfs binary is moved into place :(
computerfreak has quit [Remote host closed the connection]
<dignifiedquire> chriscool: I tried running it in powershell :D
<dignifiedquire> which failed, for reasons, like no bash available
<dignifiedquire> I think the git prompt is just a wrapper around cygwin
ygrek_ has joined #ipfs
<chriscool> yeah powershell is perhaps a bit too weird at least at first
<fazo> _obi_: here you can find info about pub/sub (notifications) https://github.com/ipfs/notes/issues/64
atrapado has quit [Quit: Leaving]
<dignifiedquire> chriscool: which is fine, I got at least https://ci.appveyor.com/project/ipfsbot/test/build/66 sharness telling me that it’s failing :)
<fazo> as far as identities go, you can look here: https://github.com/ipfs/POST
<_obi_> fazo: thanks!
<fazo> _obi_: you're welcome :) It can be hard to navigate all the repositories!
<fazo> identities aren't discussed much at the moment, for now the public key of a node is used as identification but there is no standard way to link multiple nodes to the same identity.
<chriscool> git for windows was using msys (http://www.mingw.org/wiki/msys) but it doesn't these days I think
<achin> git now has a native windows build
<_obi_> fazo: ah yes, that's what I figured. I thought it strange at first that node PK is the (user?) identifier; I can imagine a user may want multiple PKs to have personal, group shared or even publicly shared (anonymous) keys/identities
<chriscool> about https://ci.appveyor.com/project/ipfsbot/test/build/66 it looks like the bin directory where binaries are put is not in the PATH
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<dignifiedquire> chriscool: I see, so they have to be in the path, will check that out that should be easy to fix
<chriscool> yeah and sharness has been extracted from git, so git should have the same problem running tests on Windows that we have
<dignifiedquire> achin: right, but the git prompt shipped with git for windows is much more than just git, it’s pretty much bash on windows, so I don’t think they are replicating everything there
<chriscool> I think bash or a POSIX compatible shell is needed anyway
<dignifiedquire> chriscool: yeah
<chriscool> about the PATH, go-ipfs/test/sharness/lib/test-lib.sh does: PATH=$(pwd)/bin:${PATH}
<chriscool> which should do the job of making the binaries available
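chriscool's PATH line can be demonstrated in isolation with a stub binary (a sketch using a throwaway directory; the stub `ipfs` script is hypothetical, purely to show the shadowing effect):

```shell
#!/bin/sh
# Replicate the test-lib.sh pattern: prepend a local bin/ so freshly built
# binaries win over anything already installed system-wide.
tmp=$(mktemp -d)
mkdir -p "$tmp/bin"
printf '#!/bin/sh\necho stub-ipfs\n' > "$tmp/bin/ipfs"
chmod +x "$tmp/bin/ipfs"     # a common gotcha: without +x, the lookup still fails
PATH="$tmp/bin:$PATH"        # same shape as PATH=$(pwd)/bin:${PATH}
command -v ipfs              # now resolves to the stub in $tmp/bin
```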
<dignifiedquire> but it seems like it isn’t :/
<dignifiedquire> I’ll echo the path right afterwards to check
<chriscool> yeah or it might be a permission problem on the binaries
devbug has joined #ipfs
<dignifiedquire> whyrus: any success with the multipart tip?
ashark has quit [Ping timeout: 256 seconds]
grahamperrin has left #ipfs ["Leaving"]
doublec_ has joined #ipfs
<dignifiedquire> chriscool: which ipfs returns this: https://ci.appveyor.com/project/ipfsbot/test/build/75#L97 even though the path is set correctly
* dignifiedquire scratches head
doublec_ is now known as doublec
jhulten has joined #ipfs
<dignifiedquire> enough for today, tomorrow is another day, thanks chriscool for the tips :)
dignifiedquire has quit [Quit: dignifiedquire]
dignifiedquire_ is now known as dignifiedquire
<chriscool> you are welcome dignifiedquire!
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
jamie_k has quit [Ping timeout: 240 seconds]
jamie_k has joined #ipfs
<whyrus> dignifiedquire: nope, was out running errands
srenatus_ has quit [Quit: Connection closed for inactivity]
<dignifiedquire> whyrus: no errands, fixing bugs and shipping things!
<dignifiedquire> while I’m sleeping :P
simonv3 has joined #ipfs
<whyrus> lol
dignifiedquire has quit [Quit: dignifiedquire]
<whyrus> whoa
<whyrus> i think its actually a bug
<whyrus> in go
<whyrus> YES
<whyrus> IM RIGHT
<whyrus> HA
__jkh__ has left #ipfs [#ipfs]
<richardlitt> !tell dignifiedquire thanks
<richardlitt> hmm
<richardlitt> why does !tell not work?
<whyrus> it looks like a buffer boundary issue
neoteo_ has joined #ipfs
e-lima has quit [Ping timeout: 272 seconds]
neoteo_ has quit [Remote host closed the connection]
<whyrus> yep
<whyrus> fixed in go1.5.2
<whyrus> yayyyy
<whyrus> ipfs now requires go1.5.2
jamie_k has quit [Ping timeout: 240 seconds]
fazo has quit [Quit: fazo]
voxelot has quit [Ping timeout: 272 seconds]