martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
r04r is now known as zz_r04r
NightRa has quit [Quit: Connection closed for inactivity]
hashcore has quit [Ping timeout: 245 seconds]
NeoTeo has quit [Quit: ZZZzzz…]
elgruntox has quit [Quit: leaving]
Tera2342 has joined #ipfs
jhiesey has quit [Read error: Connection reset by peer]
jhiesey has joined #ipfs
computerfreak has quit [Quit: Leaving.]
kvda has joined #ipfs
reit has joined #ipfs
corvinux has quit [Quit: IRC for Sailfish 0.9]
zugz has joined #ipfs
zugzwanged has quit [Ping timeout: 260 seconds]
border0464 has joined #ipfs
cemerick has joined #ipfs
strongest_cup has joined #ipfs
reit has quit [Read error: Connection reset by peer]
micxjo has joined #ipfs
cemerick has quit [Ping timeout: 260 seconds]
Qwertie has quit [Ping timeout: 260 seconds]
jamie_k has joined #ipfs
Not_ has joined #ipfs
evanmccarter has joined #ipfs
Not_ has quit [Quit: Leaving]
Not_ has joined #ipfs
Qwertie has joined #ipfs
jo_mo has quit [Quit: jo_mo]
devbug has quit [Ping timeout: 260 seconds]
leer10 has joined #ipfs
devbug has joined #ipfs
jamie_k has quit [Quit: jamie_k]
cemerick has joined #ipfs
simonv3 has quit [Quit: Connection closed for inactivity]
cemerick has quit [Ping timeout: 245 seconds]
computerfreak has joined #ipfs
computerfreak has quit [Remote host closed the connection]
patcon has quit [Ping timeout: 245 seconds]
dpc_ has joined #ipfs
<dpc_>
Are `ipfs publish` / `ipfs resolve` known not to work, or is it just me?
<dpc_>
Ah. It worked finally.
reit has joined #ipfs
ygrek_ has joined #ipfs
<achin>
i've been having problems, too
<achin>
(mostly with publish)
M-eternaleye has joined #ipfs
M-eternaleye has quit [Changing host]
M-eternaleye is now known as eternaleye
speewave has joined #ipfs
bbfc has left #ipfs [#ipfs]
<jbenet>
whyrusleeping: i think travis is choking on bitswap tests for fast add stuff-- what might cause this?
<jbenet>
nvm was just taking a long time
ygrek_ has quit [Ping timeout: 246 seconds]
jamie_k_ has joined #ipfs
<ipfsbot>
[go-ipfs] jbenet closed pull request #1961: initial support for commands to use external binaries (master...feat/external-bins) http://git.io/v8dJZ
grahamperrin has joined #ipfs
pfraze has quit [Remote host closed the connection]
dpc_ has quit [Remote host closed the connection]
dpc_ has joined #ipfs
border0464 has quit [Ping timeout: 246 seconds]
border0464 has joined #ipfs
Not_ has quit [Ping timeout: 246 seconds]
grahamperrin has quit [Quit: Leaving]
grahamperrin has joined #ipfs
Matoro has joined #ipfs
ogzy has quit [Ping timeout: 260 seconds]
Tera2342 has quit [Ping timeout: 260 seconds]
dpc_ has quit [Ping timeout: 245 seconds]
dpc_ has joined #ipfs
hjoest has quit [Ping timeout: 246 seconds]
hjoest has joined #ipfs
devbug has quit [Ping timeout: 246 seconds]
patcon has joined #ipfs
reit has quit [Ping timeout: 245 seconds]
dpc_ has quit [Quit: Leaving]
ulrichard has joined #ipfs
reit has joined #ipfs
jamie_k_ has quit [Quit: jamie_k_]
jamie_k_ has joined #ipfs
fass has quit [Ping timeout: 272 seconds]
jamie_k_ has quit [Client Quit]
micxjo has quit [Remote host closed the connection]
sbruce_ has quit [Read error: Connection reset by peer]
sbruce_ has joined #ipfs
SebastianCB has joined #ipfs
NightRa has joined #ipfs
e-lima has joined #ipfs
devbug has joined #ipfs
hjoest has quit [Ping timeout: 246 seconds]
hjoest has joined #ipfs
devbug has quit [Ping timeout: 260 seconds]
devbug has joined #ipfs
Guest53989 has joined #ipfs
grahamperrin has quit [Remote host closed the connection]
evanmccarter has quit [Quit: Connection closed for inactivity]
<noffle>
jbenet: any update on ipfs meetup in bay area / sf?
<jbenet>
noffle: not yet; i've had zero time to think about it, and it looks like i need to take off this wed/thu.
<jbenet>
noffle: i'd be free to lead one tue or wed night, but seems way too short notice to rally people together.
speewave has quit [Quit: Going offline, see ya! (www.adiirc.com)]
slothbag has joined #ipfs
s_kunk has quit [Ping timeout: 240 seconds]
rombou has quit [Ping timeout: 260 seconds]
strongest_cup has quit [Ping timeout: 245 seconds]
strongest_cup has joined #ipfs
zz_r04r is now known as r04r
border0464 has quit [Ping timeout: 245 seconds]
border0464 has joined #ipfs
leer10 has quit [Ping timeout: 246 seconds]
rombou has joined #ipfs
<haadcode>
morning o/
<M-mubot>
Good 'aye!, @freenode_haadcode:matrix.org
<zignig>
hey all
Tera2342 has quit [Ping timeout: 260 seconds]
dignifiedquire has joined #ipfs
dignifiedquire has quit [Client Quit]
<davidar>
Hmm, I should fix how mubot names people
CarlWeathers has quit [Ping timeout: 245 seconds]
CarlWeathers has joined #ipfs
<zignig>
davidar: o/
<davidar>
zignig: \o
<zignig>
got some leave so I'm calculating my programming time at the moment.
<zignig>
I'm looking into getting astralboot to run up the ipfs build and CI infrastructure.
<davidar>
zignig: write a program to do it for you :p
<davidar>
cool
<zignig>
I think that some serious DOGFOODING is needed.
<zignig>
also, being able to build the build infrastructure from a single hash is nice and recursive.
<davidar>
indeed
<zignig>
Tex.js is looking nice, is there any way to cache it more?
<davidar>
yeah, i haven't put too much effort into optimising it yet
<davidar>
mostly waiting for the arxiv corpus to be html-ified so we have something to test it on
<davidar>
premature optimisation, etc
<zignig>
indeed. I've been slurping the xml def file but have not done much with it.
dignifiedquire has joined #ipfs
dignifiedquire has quit [Client Quit]
<davidar>
should be able to do even more cool stuff once the article content itself is in xml too ;)
rendar has joined #ipfs
RJ2 has quit [Read error: Connection reset by peer]
RJ2 has joined #ipfs
rombou has quit [Quit: Leaving.]
rombou has joined #ipfs
rjeli has quit [Ping timeout: 264 seconds]
CarlWeathers has quit [Read error: Connection reset by peer]
CarlWeathers has joined #ipfs
CarlWeathers has quit [Read error: Connection reset by peer]
rombou has quit [Ping timeout: 246 seconds]
rjeli has joined #ipfs
CarlWeathers has joined #ipfs
guybrush_ is now known as guybrush
guybrush has quit [Changing host]
guybrush has joined #ipfs
sivachandran has joined #ipfs
<sivachandran>
Hi, I am trying to find out how well IPFS utilises the network bandwidth. On a 500Mbps communication channel I see IPFS capped at 50Mbps, whereas scp is able to utilise the max bandwidth. Is there a known limitation in IPFS related to network bandwidth utilisation?
s_kunk has joined #ipfs
s_kunk_ has joined #ipfs
<jbenet>
sivachandran: there shouldn't be. i get higher bw utilization-- can you tell me more about the use case here?
s_kunk has quit [Disconnected by services]
<jbenet>
(what machines?)
CarlWeathers has quit [Ping timeout: 260 seconds]
s_kunk_ is now known as s_kunk
s_kunk has quit [Changing host]
s_kunk has joined #ipfs
<sivachandran>
I am trying on two AWS EC2 instances (m4.large type) within the same region. iperf reports 545Mbps between the machines. I get almost the same bandwidth utilisation when I copy through scp.
<sivachandran>
I initialised ipfs on both instances, added a large file on one ipfs node, and tried `ipfs cat` of the added file's hash on the other node.
<sivachandran>
Measured the time using 'time'.
<jbenet>
sivachandran: the short of it is that there's a ton we need to optimize still -- but what we should do is figure out where go-ipfs is getting stuck in your case.
<sivachandran>
Ok. Are there any parameters that I can experiment with? Or is there a place in go-ipfs I can look?
<jbenet>
sivachandran: some possibilities: (a) right now bitswap is not able to express full dag path queries, so it may lose some time getting directories in a hierarchy. in practice it's not too bad, but it is inefficient. (b) there's hashing involved, so some of this _may_ be waiting for cpu and poorly utilizing the bw in the meantime (higher concurrency in certain areas would fix this easily). (c) bitswap also has a constant limit of blocks being downloaded from a peer. in practice this should be fine, but there may be a problem at play.
NightRa has quit [Quit: Connection closed for inactivity]
<jbenet>
sivachandran: could you get me a profile of the bandwidth utilization of the machine? (sorry we should make it easier to get these with ipfs itself, but not there yet)
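For reference, a profile like the one jbenet asks for can be approximated by timing a transfer through the daemon's HTTP API. A minimal Go sketch, assuming a local daemon with the default API on port 5001 and the large test file already added on the remote node:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	hash := os.Args[1] // hash of the large test file

	// Stream the file through the local daemon; newer daemons require
	// POST for API calls, older ones also accepted GET.
	start := time.Now()
	resp, err := http.Get("http://127.0.0.1:5001/api/v0/cat?arg=" + hash)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print per-second throughput, so stalls (cpu-bound hashing, bitswap
	// block limits) show up as visible dips in the profile.
	buf := make([]byte, 1<<20)
	var total, window int64
	tick := time.Now()
	for {
		n, err := resp.Body.Read(buf)
		total += int64(n)
		window += int64(n)
		if time.Since(tick) >= time.Second {
			fmt.Printf("%6.1f Mbit/s\n", float64(window*8)/1e6/time.Since(tick).Seconds())
			window, tick = 0, time.Now()
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
	}
	fmt.Printf("total: %d bytes in %s\n", total, time.Since(start))
}
```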
rombou has joined #ipfs
<sivachandran>
jbenet: I will try.
bergie has quit [Ping timeout: 264 seconds]
bergie has joined #ipfs
devbug has quit [Ping timeout: 264 seconds]
<jbenet>
sivachandran: looking at the pattern will help me diagnose
dignifiedquire has joined #ipfs
infinity0 has quit [Remote host closed the connection]
<jbenet>
warning: the repo migration is not in there yet, so use a different repo
<jbenet>
and the network handshake is different, so you cant talk to <0.4.0 versions.
<achin>
what does the upgrade path look like, then? (how do i serve content to both 0.4.0 nodes and 0.3.x nodes?)
<jbenet>
achin: we're still deciding that. may want to voice concerns + desires in an issue in go-ipfs
<jbenet>
(our goal is to get everyone ready to jump to 0.4.0 together. ofc, not everyone will, so we want to make this as smooth as we can)
<achin>
sure thing. is there an already open issue?
<jbenet>
dont recall.
<achin>
k
<achin>
i'll ponder
<mafintosh>
jbenet: it wasnt me :)
<jbenet>
mafintosh: **shrug**
sivachandran has quit [Quit: Leaving...]
<pepesza>
computerfreak: try using gvm for managing go versions. Worked wonders for me.
patcon has quit [Ping timeout: 260 seconds]
<computerfreak>
which version is ipfs.io running? because i use that to publish my demo-website to the internet ..
<computerfreak>
pepesza: thx, will try that
<computerfreak>
so it's not recommended to update right now?
robcat has joined #ipfs
robcat has left #ipfs [#ipfs]
robcat has joined #ipfs
Encrypt has quit [Quit: Eating time!]
infinity0 has quit [Ping timeout: 246 seconds]
yaoe has quit [Ping timeout: 246 seconds]
<robcat>
Hi, I'm trying to write an adapter for pacman (the Arch Linux package manager) to download the packages from ipfs directly using their hash
<robcat>
in the package database the sha256sums are available for each package, and my challenge is to convert them into base58 encoded multihashes
<robcat>
I used go-multihash to do the reverse operation (converting the b58 hash that results from `ipfs add` and extracting the sha256 hash), but it doesn't match the output of sha256sum
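For context, the forward operation robcat describes looks roughly like the sketch below, using go-multihash (the import path here is the current multiformats one; at the time of this log the package lived under github.com/jbenet). It yields a valid base58 multihash of the raw sha256 digest but, as the discussion below explains, it will not match what `ipfs add` prints, because ipfs hashes chunked, protobuf-wrapped dag nodes rather than the raw file bytes:

```go
package main

import (
	"crypto/sha256"
	"fmt"

	mh "github.com/multiformats/go-multihash"
)

func main() {
	// Raw sha256 digest of the file contents, same as `sha256sum`.
	data := []byte("foo\n")
	digest := sha256.Sum256(data)

	// Wrap the digest in a multihash: prepends the hash function code
	// and digest length, then base58-encode it.
	encoded, err := mh.Encode(digest[:], mh.SHA2_256)
	if err != nil {
		panic(err)
	}
	fmt.Println(mh.Multihash(encoded).B58String())
}
```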
Nitori has quit [Write error: Connection reset by peer]
Nitori has joined #ipfs
<cryptix>
robcat: do you know about the chunking involved when reading data into ipfs?
Encrypt has joined #ipfs
e-lima has quit [Ping timeout: 245 seconds]
<cryptix>
robcat: id love to see that happen for a bunch of projects but we need an index of full file hash to possible ipfs hashes
<cryptix>
like bittorrent, ipfs splits up files and then constructs a merkle tree/dag from parts of the file
<robcat>
cryptix: about the chunking, I'm using the ipfs default one
<robcat>
size-64
<cryptix>
right now there are three layouts for this: balanced (fixed-size blocks), trickle (optimized for streaming) and rabin (totally off the top of my head: rolling fingerprint something something.. :))
<robcat>
shouldn't it be the one used by sha256sum too?
jonl has quit [Ping timeout: 264 seconds]
<cryptix>
hrm.. not sure about how the sha256sum tool is implemented. i'd guess if it did chunking, the hashes would get longer (like bt infohash)
jonl has joined #ipfs
<robcat>
ok, maybe for big files chunking may be a problem...
<robcat>
but I'm trying to make it work with a 4-byte file (i.e. "foo\n")
<cryptix>
looking at your code now
<cryptix>
oh well
<cryptix>
wait
<achin>
if you have the hash of a file (say, from sha256sum), you can't really automatically generate an ipfs hash without going through the "ipfs add" code
<cryptix>
ipfs also wraps data and link objects
<cryptix>
with protobuf etc
<cryptix>
achin: yea, i know - but an index of .md5 .sha1 to known ipfs hashes would be really handy
<cryptix>
for pkg managers and archive lookups alike
<achin>
in general, there is not a one-to-one mapping between a file and an ipfs hash
<achin>
since a single file can have, in theory, many different IPFS hashes
<jbenet>
depending on the dag used to represent it -- the layout
<robcat>
yes, having many hashes is not a problem (I just need one)
<robcat>
and I thought that a simple file didn't need to be wrapped in a larger structure before hashing
<jbenet>
robcat: we've discussed adding the _raw data hash_ to attributes of a file, and using a tree-mode hash like blake2 for that
<cryptix>
achin: i think that can be improved - i wanted to chime in on that last sprint with some questions about this topic wrt s3 hosting for instance.. would be nice to batch request parts of a dag layout
<achin>
(sorry all, in a phone call. be right back)
<jbenet>
robcat: files get wrapped because chunking needs to happen and because of dag formatting (we've also discussed keeping raw data addressable, too, so you could do that with small files (<256k))
<cryptix>
robcat: see the plumbing commands 'ipfs object get/put' 'ipfs block get/put'
<jbenet>
robcat would this make a huge difference here?
<jbenet>
robcat: btw, the way that we're doing this for other package managers, like npm
felixn has quit [Ping timeout: 265 seconds]
dreamdust has quit [Ping timeout: 265 seconds]
<robcat>
jbenet: ok, now I get why the wrapping is necessary
<jbenet>
robcat: is to keep a "pkgmgrname -> ipfs hash" map, in an object
<jbenet>
robcat: we could similarly do "pacman hash -> ipfs hash" in an ipfs object
<jbenet>
but doesnt pacman have package names? may want to use that
<robcat>
jbenet: yes sure, pacman already looks up the package info from the package database
<robcat>
and the package database already contains various hashes
<robcat>
the "IPFS hash" could be added quite easily
<jbenet>
robcat: you could (a) put all that info in ipfs itself, and use ipfs as the package database, (b) just look up by name twice, and match the hashes (the object in ipfs land could list the pacman hash too)
<jbenet>
robcat maybe start with (b) and can work toward (a)
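A sketch of what one entry of such a map might look like, with hypothetical field names (illustrative only, not an ipfs or pacman format):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// pkgEntry is a hypothetical shape for jbenet's "pacman hash -> ipfs hash"
// map; the field names are illustrative, not a spec.
type pkgEntry struct {
	Name     string `json:"name"`   // pacman package name, for lookup by name
	Sha256   string `json:"sha256"` // hash already present in the package database
	IpfsHash string `json:"ipfs"`   // hash reported by `ipfs add` for the same file
}

func main() {
	entry := pkgEntry{
		Name:     "example-1.0-1-x86_64.pkg.tar.xz",
		Sha256:   "<sha256 from the pacman database>",
		IpfsHash: "<hash printed by ipfs add>",
	}
	out, err := json.MarshalIndent(entry, "", "  ")
	if err != nil {
		panic(err)
	}
	// The serialized map can itself live in ipfs (e.g. via the `ipfs object
	// put` / `ipfs add` plumbing mentioned above), so clients resolve
	// name -> entry -> ipfs hash entirely over ipfs.
	fmt.Println(string(out))
}
```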
<cryptix>
id still like to see an index of known md5/sha1/sha256 to a known ipfs hash
<cryptix>
had a lot of occurrences where PKGBUILD failed because ftp was unreachable for instance
<jbenet>
i'll track it and answer as you need
<cryptix>
and although i'd love to see everything on ipfs, it will take some time to get there...
ogzy has joined #ipfs
pod has joined #ipfs
cemerick has joined #ipfs
<achin>
cryptix (et al), my point was simply, you can't just take the hash of a file and convert it into an ipfs hash. you have to actually add the file to ipfs and then record it (perhaps in IPFS itself!)
<cryptix>
achin: +1 with you on that one :)
<ipfsbot>
[webui] digital-dreamer opened pull request #118: German localization (master...l20n-de) http://git.io/vRKSf
thelinuxkid has joined #ipfs
<jbenet>
achin: indeed.
<jbenet>
achin: well-- you _could_ figure out what the hash would be by doing the same thing ipfs would do-- but that's just semantics ;)
edwardk has quit [Ping timeout: 245 seconds]
<achin>
but there is an important difference! the difference is between "what hash can i use to get this file" and "what hash might be used to get this file"
<jbenet>
achin: more generally, what we both mean is: you cant predict what the cryptographic hash of something is until you hash it.
rombou has quit [Ping timeout: 246 seconds]
<achin>
right
bergie has quit [Ping timeout: 245 seconds]
srenatus has quit [Ping timeout: 260 seconds]
<cryptix>
is there news on dag visualization tools? i'd like to understand rabin better
<cryptix>
how it works for different kinds of media etc
<jbenet>
achin: understood-- i mean that in a way, you can treat the formatting as part of "the hash scheme", so ipfs_hash(data) requires chunking it, forming the tree, getting the root, etc.
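A deliberately simplified sketch of that view, treating chunk + tree + root as one "hash scheme". The framing here is a placeholder, not the real unixfs/merkledag protobuf format, so the resulting hash only illustrates the shape of the computation:

```go
package main

import (
	"crypto/sha256"
	"fmt"

	mh "github.com/multiformats/go-multihash"
)

// wrap stands in for the unixfs/merkledag protobuf framing that real ipfs
// applies around chunks and nodes; the actual encoding is different.
func wrap(tag byte, payload []byte) []byte {
	return append([]byte{tag}, payload...)
}

// ipfsHashSketch shows the shape of "ipfs_hash(data)": chunk the data,
// frame and hash each chunk, then hash a root node linking the chunk
// hashes. Placeholder framing, so the result is not a real ipfs hash.
func ipfsHashSketch(data []byte, chunkSize int) (string, error) {
	var links []byte
	for off := 0; off < len(data); off += chunkSize {
		end := off + chunkSize
		if end > len(data) {
			end = len(data)
		}
		leaf := sha256.Sum256(wrap(0x00, data[off:end]))
		links = append(links, leaf[:]...)
	}
	root := sha256.Sum256(wrap(0x01, links))
	encoded, err := mh.Encode(root[:], mh.SHA2_256)
	if err != nil {
		return "", err
	}
	return mh.Multihash(encoded).B58String(), nil
}

func main() {
	s, err := ipfsHashSketch([]byte("foo\n"), 256*1024) // default-ish chunk size
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```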
<jbenet>
cryptix: would be great to make a benchmark
kumavis has quit [Ping timeout: 245 seconds]
<cryptix>
yes! varying kinds of block size etc..
<cryptix>
(bit bummed you wont make it to 32c3 but i can totally understand - i most likely wont make it to next year's if it isn't in hamburg, due to similar reasons)
<achin>
jbenet: i'd like to agree, but i'm worried that people will write their own tools for chunking/tree-forming that will not match the impl that's used in go-ipfs
<cryptix>
achin: i dont see that as a problem as long as you keep to the rules
<jbenet>
achin: that's why we write up specs :) -- people don't create their own versions of "sha256" or of "git data structs"
<jbenet>
achin: but i hear you-- easier to deviate with ipfs.
<jbenet>
achin: specs and tests. this is how any protocols agree-- including implementations of hashes like sha256
strongest_cup has quit [Ping timeout: 245 seconds]
<achin>
i have a pretty massive file that i want to add to IPFS. i happen to know that it has an internal structure that will lend itself to a certain type of chunking. this type of chunking is so specific to my file that go-ipfs can't be expected to do the optimal thing
<achin>
so i'll write a tool to add this file in the optimal way (using the optimal chunking)
<jbenet>
achin: nothing prevents you from reading the sha256 descriptions and rolling your own implementation that ignores details or chosen parameters and thus yields totally different hashes
kumavis has joined #ipfs
strongest_cup has joined #ipfs
speewave has joined #ipfs
<jbenet>
achin: ohhh i see what you mean-- sure and that's fine.
<achin>
so the hash i compute (which will obviously be correct for my given chunking) will be a totally valid hash. and it will be different (also obviously) from the hash that go-ipfs computes. but the hash that go-ipfs computes is not the "more correct" hash
<jbenet>
achin: and doing so you know that you won't match other graph layouts of it. if what you're worried about is having too many graphs here-- one solution is to (a) track the raw hash of the data (preferably with a nice hash for it like blake2 https://github.com/ipfs/notes/issues/83) and then (b) maintain an index of all observed "access graphs" (or indices)
<jbenet>
for given data
pokeball99 has quit [Ping timeout: 264 seconds]
<jbenet>
achin: there's a tangential problem here btw. suppose i give you two images, one encoded in jpg and one in png. they're the same image. sort of.
RJ2 has quit [Read error: Connection reset by peer]
anderspree has quit [Read error: Connection reset by peer]
<jbenet>
achin: unix doesn't solve this, and treats the two images (which are not byte-for-byte identical, though they are meant to represent the same data, and can often represent it so well that there are 1:1 mappings) as "different files"
<achin>
right. that seems unavoidable to me
<jbenet>
this is an unavoidable problem given how information theory works. and it shows up at many layers of the "data representations" stack
<achin>
backing up 1 small step, even with a single impl of go-ipfs, multiple hashes/multiple trees can be computed for the same file. all the user has to do is use different --chunker arguments
<jbenet>
achin: yes, this is not a problem.
<jbenet>
achin: yes, it decreases dedup. but in practice is not a big problem. and we can mitigate it as i described above.
pfraze has joined #ipfs
<achin>
right. i wasn't suggesting it was a problem (in that problems are things that need solving). just another demo of the one-to-many relationship between bytes and ipfs hashes
<achin>
i do believe we are all in agreement about all of this :)
<jbenet>
indeed :)
edwardk has joined #ipfs
bergie has joined #ipfs
<robcat>
jbenet: I was reading the multihash specs, and I was thinking
<robcat>
a special multihash prefix could be introduced for files shorter than 256 bytes
pokeball99 has joined #ipfs
<robcat>
e.g. 0x00: No hash
<robcat>
and in this case, the unhashed content could simply be appended to the prefix
RJ2 has joined #ipfs
<robcat>
this would be useful to address simple and universal keywords
anderspree has joined #ipfs
<robcat>
for example, in the semantic web (addressing a concept by keyword, no lookup required)
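In multihash terms the proposal is just code 0x00 with the content inlined in place of a digest, as in this sketch (it assumes a go-multihash version that accepts the identity code, which was not yet the case when this conversation happened):

```go
package main

import (
	"fmt"

	mh "github.com/multiformats/go-multihash"
)

func main() {
	// robcat's proposal: prefix 0x00 = "no hash", raw content inlined.
	// Multihash later standardized exactly this as the identity function,
	// so small values can be addressed without a lookup.
	data := []byte("foo\n")
	encoded, err := mh.Encode(data, 0x00) // <code 0x00><length><raw bytes>
	if err != nil {
		panic(err)
	}
	fmt.Printf("% x\n", encoded) // 00 04 66 6f 6f 0a
}
```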
jonl_ has joined #ipfs
jonl has quit [Ping timeout: 264 seconds]
jonl_ is now known as jonl
jamie_k has joined #ipfs
Matoro has quit [Ping timeout: 245 seconds]
bergie has quit [Read error: Connection reset by peer]
bergie has joined #ipfs
<ion>
Interesting
ekroon has quit [Ping timeout: 264 seconds]
ekroon has joined #ipfs
Guest53989 has quit [Remote host closed the connection]
devbug has joined #ipfs
srenatus has joined #ipfs
<jbenet>
robcat: yeah :) it occurred to me a while back too: "the identity hash"
<jbenet>
robcat: yep super useful.
<achin>
would there be any padding?
devbug has quit [Ping timeout: 264 seconds]
<achin>
(it would be reasonable to expect a multihash to have a fixed length)
<jbenet>
achin no, it is not. sha256 and sha3 have different lengths
<computerfreak>
so what do you guys finally think, upgrade to 0.4.0 right now or not? (i'm using ipfs.io/ipns to share my demo website, everything else local gateway) Hard to decide :D
<richardlitt>
How do I add an option to a curl request?
<richardlitt>
dignifiedquire: got sidetracked looking at getpostman.
<dignifiedquire>
:P
Matoro has joined #ipfs
voxelot has joined #ipfs
voxelot has quit [Changing host]
voxelot has joined #ipfs
* daviddias
checks what getpostman is
<daviddias>
nice
<daviddias>
for a moment I thought it would be a distributed PO box that would move my mail automatically closer to me without me having to think about it
<jbenet>
no thats gmail
<jbenet>
:)
<richardlitt>
I want to check a 200 response for `ipfs bootstrap add <peer>`. Should I add a random peer from `ipfs swarm peers`, and then remove them? What do I do in cases where I want to check a 200 but don't actually want to send the command?
<jbenet>
dignifiedquire: daviddias: want to add a hangout today for "api" ? i.e. to define the api?
<dignifiedquire>
daviddias: do we have anything on the agenda for apps today? the only thing I have to talk about is ipfs/distributions and I’m not sure where to put that
<richardlitt>
Cool. daviddias: are you set for your two discussions?
<daviddias>
I'll remove them from the calendar
<daviddias>
dignifiedquire: what is the state of `station`?
<dignifiedquire>
daviddias: no change since last week
<dignifiedquire>
I didn’t have time to work on it, and besides me nobody touches it atm
<jbenet>
thanks richardlitt! sorry for all the changes
<richardlitt>
daviddias: as in, are those occurring today, and have you // will you set up an agenda for them
<computerfreak>
dignifiedquire: i wanted to try it but failed to install it tbh :P
<daviddias>
dignifiedquire: let's spend a bit to curate the list of issues
<richardlitt>
jbenet: no worries. Was just about to ask you if all of them are happening today
<daviddias>
and enable more people to contribute
<daviddias>
I know there is more people excited to have station ready for them to use
<dignifiedquire>
computerfreak: :/
<dignifiedquire>
daviddias: yeah I can imagine :(
<jbenet>
woah im leading apps on ipfs? i never knew :] ok
<computerfreak>
how is station different from webui?
<richardlitt>
!tell whyrusleeping lgierth that they should set up their agendas for their discussions today, and let me know // remove them from the sprint if they're not happening https://github.com/ipfs/pm/issues/67
<dignifiedquire>
jbenet: actually I am ;)
<jbenet>
oh ok good :)
<jbenet>
hahaha
<dignifiedquire>
but you are the placeholder so I can’t take any blame :P
<jbenet>
sgtm
<dignifiedquire>
daviddias: okay lets talk about how to get people contributing to station, then ipfs apps today
<richardlitt>
I'm going to remove "Permanent Link" now from the sprint table
tidux has joined #ipfs
<tidux>
is something going on with ipns recently?
<richardlitt>
sound good?
<tidux>
I don't seem to be able to resolve anything from any of my systems
<jbenet>
tidux: and your nodes are up?
<computerfreak>
tidux: i use it already to show my clients Demo-Websites :D
<richardlitt>
Should I ask my question about getting a 200 during the api call today? Never got a response
<tidux>
> ipfs swarm peers | wc -l
<tidux>
152
<tidux>
yeah it's up
<jbenet>
(sorry tidux, ipns is the most alpha part still. its much better than it used to be, but its far from perfect, please report problems, etc)
<jbenet>
hmmm and can you resolve it locally?
<jbenet>
what version-- btw?
<tidux>
this isn't local content
<tidux>
this is me trying to find other stuff
<tidux>
that I've found before
<daviddias>
tidux: what are you trying to resolve?
<jbenet>
richardlitt: what 200?
<tidux>
/ipns/QmUqBf56JeGUvuf2SiJNJahAqaVhFSHS6r9gYk5FbS4TAn among other things
<richardlitt>
jbenet: I want to check a 200 response for `ipfs bootstrap add <peer>`. Should I add a random peer from `ipfs swarm peers`, and then remove them? What do I do in cases where I want to check a 200 but don't actually want to send the command?
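One way to probe this without guessing is to call the HTTP API directly and inspect the status code, adding a known peer and then removing it again so the config is left unchanged. A sketch, assuming a local daemon on port 5001 (the multiaddr is one of the default bootstrap peers):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// call hits one API endpoint and returns only the HTTP status code.
func call(cmd, peer string) (int, error) {
	u := "http://127.0.0.1:5001/api/v0/" + cmd + "?arg=" + url.QueryEscape(peer)
	resp, err := http.Post(u, "", nil) // older daemons also accepted GET
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	// One of the default bootstrap peers, so adding it is harmless.
	peer := "/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ"

	code, err := call("bootstrap/add", peer)
	if err != nil {
		panic(err)
	}
	fmt.Println("bootstrap add:", code) // expect 200

	// Undo, so the probe leaves the bootstrap list as it found it.
	code, err = call("bootstrap/rm", peer)
	if err != nil {
		panic(err)
	}
	fmt.Println("bootstrap rm:", code)
}
```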
<jbenet>
and probably a few records still out there, which voxelot found-- but the dht churn occluded them
<voxelot>
maybe someone resolved that on my gateway at some point
<voxelot>
but not getting it locally either
<jbenet>
will need to add the "load handling" mechanic from coral so that in this case voxelot would've sent the record to others which it expected would have it too
<dignifiedquire>
not so much this week :( but at least got a start on the distributions page including some involved discussions with jbenet and whyrus
Tv` has joined #ipfs
<daviddias>
ion: and I already had a great opinion about Finland :)
<dignifiedquire>
and windows ci tests are nearly being executed
<jbenet>
ah i'll post in the appveyor thread (sharness scripts cannot find ipfs it seems)
<jbenet>
ok -- im done asking things
<richardlitt>
cool.
<richardlitt>
I think daviddias, you're ready to go!
* daviddias
incoming
<daviddias>
This last week was filled with registry-mirror, Node.js Interactive and delivering classes (which took my full Tuesday and Thursday)
<daviddias>
- npm on IPFS
<daviddias>
- [x] Got demo working + video (with extra router, antennas and everything, it is pretty sweet I think)
<daviddias>
- [x] Add npm to IPFS stravaganza. Much <3 to @whyrusleeping @lgierth and @jbenet for being super present and offloading some of their time to help me get things going faster
<daviddias>
- [~] Adding complete npm on Pollux
<daviddias>
- [x] ~70000 modules on Castor
<daviddias>
- [~] Adding complete npm on @whyrusleeping's machine with ipfs --chunker=rabin
<daviddias>
- node interactive
<daviddias>
- [x] preparation of talk
<daviddias>
- [x] flights + logistics
<daviddias>
- extra
<daviddias>
- [x] Delivering Computation for the Web classes
<daviddias>
- [x] &yet call with @jbenet
<daviddias>
- [x] some CR
Qwertie has joined #ipfs
<lgierth>
word, the demo *is* pretty sweet
<whyrus>
reminds me of 5am demo
<dignifiedquire>
daviddias: where can I see that demo
<dignifiedquire>
daviddias: also any word on uploading your local clone of npm to Castor?
<jbenet>
whyrus hahaha 5am demo best demo
<daviddias>
dignifiedquire: I'll send it to you (it is going to be public tomorrow)
<lgierth>
:):)
<whyrus>
:D
<dignifiedquire>
daviddias: thanks
<dignifiedquire>
5am demo??
<jbenet>
daviddias: great work on all the npm things
<jbenet>
daviddias: is the npm add complete yet? what's missing?
<jbenet>
daviddias: you have more space in pollux now
<daviddias>
dignifiedquire: Castor is slow for what we wanted, so we are making Pollux the primary node responsible for cloning npm
devbug has joined #ipfs
<dignifiedquire>
daviddias: I see
<jbenet>
dignifiedquire: a demo whyrusleeping and i recorded a long time ago, very tired. the recording is ridiculous, but it's a good demo. we'll re-record it some day
<dignifiedquire>
jbenet: :D
<dignifiedquire>
the next demo should contain whyrus making coffee ;)
<dignifiedquire>
daviddias: we forgot to mention “First episode of Coffee talks with Jeromy”
danlyke has joined #ipfs
<jbenet>
daviddias: is the npm add complete yet? what's missing?
<daviddias>
jbenet: it is not done 'yet', needs a bit more time
<jbenet>
daviddias: estimate?
<jbenet>
daviddias: is it going to finish?
amade has quit [Quit: leaving]
<daviddias>
It had hit disk full somewhere during the night
<jbenet>
daviddias: did you resume it?
<daviddias>
yes, yes :)
<daviddias>
and it is rechecking everything really fast
<daviddias>
cause all it has to do is reads now
<daviddias>
munched 25 gigs very fast
<jbenet>
daviddias: nice-- estimate for finish? what's it at now?
<whyrus>
have you tried running a gc during the add?
<daviddias>
like in ~1:30 hours
<daviddias>
whyrus: we used it, it works really well :)
<whyrus>
yussss
<jbenet>
nice!
<jbenet>
whyrus: could you add some progress reporting to tar, so that daviddias does not go insane when he tries to add it?
<richardlitt>
speaking of whyrus and coffee: now that we have fewer chats, we should really push this whole sync time up an hour or two to let him sleep
<jbenet>
(and me)
<whyrus>
mmm, okay
<daviddias>
it should go at a fast pace till 150GB (out of ~430GB), then it will go back to the same speed of adding when it has to hash + write
<dignifiedquire>
richardlitt: that would mean I can’t participate in the later calls :(
<richardlitt>
dignifiedquire: Ah. Nvm then.
<jbenet>
ok thanks daviddias. am done asking questions
<daviddias>
I'll keep everyone up to date as I know more and have estimates
<daviddias>
right now, the IPFS process is in that phase where it doesn't fully know
<richardlitt>
anyone else with questions for daviddias ?
<whyrus>
daviddias: what is the square root of pi?
wiedi has quit [Quit: ^Z]
<ion>
daviddias: What is the answer to the ultimate question about life, the universe and everything?
<jbenet>
we're trying to get through this quickly.
<richardlitt>
alright
<richardlitt>
me next
Pharyngeal has quit [Ping timeout: 272 seconds]
<richardlitt>
## This Week
<richardlitt>
- [x] Make ipfs/community/pull/65 work again
<richardlitt>
- [~] Spec out http api `bootstrap` Dignifiedquire/ipfs-http-api/pull/1
<richardlitt>
- [ ] Spec out http api `cat` ipfs/api/issues/7
<richardlitt>
- [ ] try out diasdavid/registry-mirror/issues/10, and ipfs/specs/labels/libp2p
<richardlitt>
- [ ] Make sure that each IPFS repository has a README.
<richardlitt>
- [ ] Find the old Copyright stuff ipfs/community/issues/79
<richardlitt>
- [~] Sprint Management
<richardlitt>
## Extra
<richardlitt>
- [x] Added IRC to ipfs/pm list of people ipfs/pm/pull/64
<richardlitt>
- [x] Proposed shift in sprint todos list ipfs/pm/issues/65
<richardlitt>
- [x] Edited sprint master's role ipfs/pm/pull/66
<richardlitt>
EOF
<richardlitt>
Some community and pr management this week, but didn't get to do all the things I had hoped to do. Better placed as of this morning to do more work on the API - thanks dig.
<daviddias>
whyrus ion the answer would be -> Don't use an Intel Atom to figure that out :p
<jbenet>
ok thanks richardlitt
<richardlitt>
I don't expect a lot of questions for me... so... kyledrake ?
<dignifiedquire>
richardlitt: excited for you to get into api docs :)
<jbenet>
he may not be here
<richardlitt>
cool. I can post his update?
<kyledrake>
I'm here
<kyledrake>
- [X] work on libp2p-website improvements
<kyledrake>
- [~] data persistence guide for IPFS site
<kyledrake>
- [X] preparations for BW talk and panel
<kyledrake>
- [X] S3 integration research and testing
<daviddias>
richardlitt: dignifiedquire me too :)
<richardlitt>
:D
<kyledrake>
So I -prepared- for BW, but my passport is gone. So I'm in the states for the foreseeable future.
<pjz>
ipfs, doing approximately nothing for the weekend, now has an RSS of 3.3GB
<kyledrake>
daviddias I'll show you the libp2p stuff this week
<daviddias>
kyledrake: isn't there an option in the US to make a new one with 'urgency' mode on, like next day ready?
<kyledrake>
daviddias not if you also lost your birth certificate
<lgierth>
what's bw?
<daviddias>
kyledrake: sweet! If you feel comfortable working in an open PR, I'm more able to provide constant feedback
<mappum>
I've done that before, didn't need a birth certificate. Think I might have just used my driver's license
<mappum>
Or it might have been an expired passport
<kyledrake>
mappum you likely had an expired passport
grahamperrin has joined #ipfs
hjoest has quit [Ping timeout: 246 seconds]
<jbenet>
lgierth: blockchain workshop -- a conference we were going to present at. (i was at the last one)
<whyrus>
got completely sidetracked and ended up just making ipfs add really fast:
<whyrus>
- [x] ipfs add superfast
Pharyngeal has joined #ipfs
<richardlitt>
hahaha
<lgierth>
super super fast
<lgierth>
<3
<whyrus>
super dooper
hjoest has joined #ipfs
<whyrus>
this week might be a bit slow for me, hectic end of the year type stuff
<jbenet>
whyrus: fantastic work with add-- great improvements for people
<jbenet>
whyrus: theres a bit more CR for you on the ipfs-update PR, but otherwise LGTM.
ashark has quit [Read error: Connection reset by peer]
<whyrus>
mmmkay, i'll get to that today
<jbenet>
whyrus: let's focus on shipping 0.3.10. if we need to add a 0.3.11 for straggler things before 0.4.0 so be it. but we should ship 0.3.10 sooner
<whyrus>
we can do 0.3.10 right now
<whyrus>
everything i need is in master
<daviddias>
whyrus: thank you for all of the things! huge ^5! :D
<richardlitt>
jbenet: Can we add that as a sprint goal?
<jbenet>
richardlitt: yep!
<jbenet>
whyrus: is utp in master?
<jbenet>
whyrus: i thought utp wasnt in master yet
<whyrus>
utp isnt in master yet
<whyrus>
i was thinking it was going into 0.4.0
<jbenet>
whyrus we had planned to put it in 0.3.10. we can push it to 0.3.11 if we need
<lgierth>
- build a prometheus exporter using :5001
<lgierth>
it looks like we won't be getting these two new hosts today :/ got an order confirmation, that was it
s_kunk has quit [Ping timeout: 240 seconds]
<daviddias>
lgierth: thank you for dealing with so many infrastructure things
<ipfsbot>
[go-ipfs] whyrusleeping created exp/add-tar-mix (+1 new commit): http://git.io/vRiP9
<ipfsbot>
go-ipfs/exp/add-tar-mix 565f526 Jeromy: experiment to check disk savings of inline tar expansion...
<daviddias>
so no new hetzner hosts for today?
<daviddias>
how much time do they typically take? Is there a holiday over there or something?
amingoia has joined #ipfs
<lgierth>
yeah i think so :/ they're saying they activate new hosts between 9am and 6pm, and it's 7pm now
<lgierth>
just a normal working day. i forgot to order yesterday though and ordered today around 2pm
<whyrus>
daviddias: try the exp/add-tar-mix branch out on my machine to see how much space savings we get from expanding tar files during add
<jbenet>
daviddias: they prob wouldnt help anyway, you'd need to move stuff to them.
RJ2 has quit []
wiedi has joined #ipfs
e-lima has joined #ipfs
<jbenet>
thanks for handling all the hosts infra madness this week lgierth
RJ2 has joined #ipfs
<jbenet>
and +1 for ipfs/go-ipfs#2001
<wao_>
cjdns get connected with ipfs? woah
<lgierth>
oh what i posted is slightly out of date x) the assets PR is merged, and i consider #2001 largely done from my part
<daviddias>
jbenet: if I understood well, these new machines would be super close to pollux with a high-bandwidth link between them; moving the registry would have been fast
<richardlitt>
cool
<lgierth>
daviddias: yeah probably same datacenter
<jbenet>
daviddias: ah. hmm lgierth: think they do rush order? if they were to be ready they might make a big material difference to daviddias tonight
<lgierth>
i'll try
<daviddias>
whyrus: I need one more machine. Yours is busy with rabin (and not really fast), Pollux is busy with normal add, Castor would never finish in time
<whyrus>
hrm...
<whyrus>
rabin is slowww
<daviddias>
I mean, I need one more machine to use your new ipfs tar
<lgierth>
i'll get you one, one way or the other
<daviddias>
thank you! What is the plan?
<lgierth>
not sure yet -- any more sync updates?
<jbenet>
no we're good-- go eat
<lgierth>
cool i'll be back in ~45 and then we'll get david some storage
<jbenet>
thanks for driving the sprint richardlitt
<lgierth>
yeah thanks! :)
<richardlitt>
No problem
<richardlitt>
Thanks all! Please submit your to-dos to the new sprint _by the end of the day_.
<richardlitt>
You all know where to find the new sprint. :)
<daviddias>
thank you richardlitt :)
<jbenet>
anybody else that wanted to talk about anything, feel free to do so now, and/or drop it into the sprint: https://github.com/ipfs/pm/issues/60
<achin>
tossing this idea out there: at the end of every sprint, everyone should list zero or one things they completed that they'd like to call attention to. this thing doesn't have to be 100% user-facing (so infrastructure stuff still counts), but the idea is that these things could be used as a quick high-level "this week in IPFS"
<jbenet>
achin: i like that idea. maybe one person could crawl the sprint for those things, encouraging people to write stronger update messages?
<daviddias>
"This week in IPFS" kind of update
<daviddias>
sgtm :)
<achin>
jbenet: i tried myself, but i'm missing some context for a lot of these
<achin>
it's hard for me to tell what's important enough to deserve a special mention
dignifiedquire_ has joined #ipfs
grahamperrin has joined #ipfs
grahamperrin has left #ipfs [#ipfs]
<dignifiedquire>
achin: maybe everyone can add a star to those issues they think are relevant and you could crawl them then ⭐️
<achin>
dignifiedquire: yeah, that's what i was thinking
<jbenet>
dignifiedquire +1
<achin>
just a super simple annotation (plus an optional few words, if they want) on the sprint updates. i'd gather, collate, and hyperlink them
<whyrus>
you can ignore the unrecognized type errors for now, i need to fix those
<achin>
ok, those two hashes are peers that are providing the hash in question
<whyrus>
but they don't mean anything of great consequence
<ogzy>
achin, ok the first one is the node itself
<ogzy>
achin, and the other one is the other node that's up
<ogzy>
achin, so whenever the file is accessed from the other node, is it downloaded anyway?
rombou has quit [Ping timeout: 250 seconds]
<achin>
i'm not quite sure what node you are referring to by "other node"
<ogzy>
achin, the nodes that are up and running the ipfs daemon; i stopped the ipfs daemon on the node where i added the content
ygrek_ has quit [Ping timeout: 240 seconds]
<achin>
using the names NodeA and NodeB, can you say again what you did on what node? you added something on nodeA, then shutdown nodeA, and then tried "ipfs get" on nodeB?
<ogzy>
achin, yes exactly
<ogzy>
achin, before shutting down nodeA, i tried ipfs get from nodeB, then shut down nodeA, tried again and it was still accessible, also from nodeC
<ogzy>
achin, ipfs dht findprovs says it is at nodeB and nodeC
<achin>
in that case nodeB would have had a copy
<achin>
when did you do the GC?
<ogzy>
achin, right after shutting down
<achin>
and you ran the GC on what node?
<ogzy>
achin, yes
* lgierth
infrastructure hangout about to start
<achin>
what node?
<ogzy>
achin, nodeB
<ogzy>
achin, also on nodeC
<ogzy>
achin, now i shut down nodeC, the file is inaccessible
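What ogzy observed is the expected caching behaviour: a node that fetches content keeps the blocks only until garbage collection removes them, unless they are pinned. A sketch of the sequence against the HTTP API (the hash is a placeholder; error handling trimmed):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// api issues one command against a local daemon and returns the raw body.
func api(cmd, arg string) string {
	u := "http://127.0.0.1:5001/api/v0/" + cmd
	if arg != "" {
		u += "?arg=" + url.QueryEscape(arg)
	}
	resp, err := http.Post(u, "", nil) // older daemons also accepted GET
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	hash := "Qm..." // placeholder: hash of content originally added on nodeA

	api("cat", hash)                  // fetching caches the blocks locally
	fmt.Println(api("pin/add", hash)) // pinning makes the cache survive gc
	fmt.Println(api("repo/gc", ""))   // without the pin, this drops the cached blocks
}
```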
<jbenet>
dignifiedquire: what's the hangout link for infra?
<dignifiedquire>
chriscool: not much to discuss tbh, just that it’s a bit tricky to get it all running, but I got somewhere now that I’m running everything inside cygwin (env variables are the biggest problem, though I’m not sure whether that’s an issue with sharness in general or with our makefiles)
<dignifiedquire>
chriscool: only thing is if you have any specific tips or gotchas that you are aware of
<_obi_>
hi all; could someone point me to plans/info pertaining to notifications and identities? I'm getting lost in all the repos on github
speewave has joined #ipfs
<chriscool>
dignifiedquire unfortunately I never tried to run sharness tests on Windows
<chriscool>
did you try to run them first in a git prompt?
patcon has joined #ipfs
<dignifiedquire>
jbenet: not finding the place where the ipfs binary is moved into place :(
computerfreak has quit [Remote host closed the connection]
<dignifiedquire>
chriscool: I tried running it in powershell :D
<dignifiedquire>
which failed, for reasons, like no bash available
<dignifiedquire>
I think the git prompt is just a wrapper around cygwin
ygrek_ has joined #ipfs
<chriscool>
yeah powershell is perhaps a bit too weird at least at first
<fazo>
_obi_: you're welcome :) It can be hard to navigate all the repositories!
<fazo>
identities aren't discussed much at the moment, for now the public key of a node is used as identification but there is no standard way to link multiple nodes to the same identity.
<_obi_>
fazo: ah yes, that's what I figured. I thought it strange at first that node PK is the (user?) identifier; I can imagine a user may want multiple PKs to have personal, group shared or even publicly shared (anonymous) keys/identities
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<dignifiedquire>
chriscool: I see, so they have to be in the path, will check that out, that should be easy to fix
<chriscool>
yeah and sharness has been extracted from git, so git should have the same problems we have running tests on Windows
<dignifiedquire>
achin: right, but the git prompt shipped with git for windows is much more than just git, it’s bash on windows pretty much, so I don’t think they are replicating everything there
<chriscool>
I think bash or a POSIX compatible shell is needed anyway
<dignifiedquire>
chriscool: yeah
<chriscool>
about the PATH, go-ipfs/test/sharness/lib/test-lib.sh does: PATH=$(pwd)/bin:${PATH}
<chriscool>
which should do the job of making the binaries available
<dignifiedquire>
but it seems like it isn’t :/
<dignifiedquire>
I’ll echo the path right afterwards to check
<chriscool>
yeah or it might be a permission problem on the binaries
devbug has joined #ipfs
<dignifiedquire>
whyrus: any success with the multipart tip?