<lgierth>
whyrusleeping: mmh yes these make sense - i'm just wondering about these cases where you update the package, and then have to make sure that every go-multihash in the dependency graph is the same
<whyrusleeping>
eh, you cant escape that one
<lgierth>
no?
<lgierth>
mh, it's gonna be even more of a recurring pain for non-ipfs projects which will use e.g. go-libp2p
<lgierth>
or further down the road, some app using multihash and two packages from different maintainers/orgs also using multihash, etc.
anonymuse has quit [Remote host closed the connection]
<lgierth>
is there some cast-this-struct-type-to-another-struct-type-with-the-same-fields thing in golang that i'm missing?
<whyrusleeping>
lgierth: you cant even use the multiaddr interfaces from different packages together
<whyrusleeping>
interfaces don't solve that problem for you
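(To lgierth's question above: Go does allow an explicit conversion between two struct types whose fields match exactly, so the narrow case is fine. A minimal sketch, with made-up type and field names; as whyrusleeping notes, it stops working as soon as a field's type comes from the duplicated package itself.)

```go
package main

import "fmt"

// Two identical struct definitions, standing in for the "same" multihash
// struct vendored under two different import paths (names are illustrative).
type MultihashA struct {
	Code   uint64
	Digest []byte
}

type MultihashB struct {
	Code   uint64
	Digest []byte
}

func main() {
	a := MultihashA{Code: 0x12, Digest: []byte{0xde, 0xad}}
	// The explicit conversion is legal because the field names and types match exactly.
	b := MultihashB(a)
	fmt.Println(b)
	// But once a field's type is itself declared inside the duplicated package,
	// the two structs are no longer identical and this conversion fails to compile.
}
```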
matoro has joined #ipfs
<lgierth>
uuuugh
<lgierth>
ok that makes sense and sad
<whyrusleeping>
yeap
<lgierth>
ok forget everything i said about interfaces then
<whyrusleeping>
basically, make any package tons of people are gonna import be perfect
<lgierth>
otherwise it's gonna be 1) go-cid.Cid2, or 2) upgrade pain
<lgierth>
ok
cads has quit [Ping timeout: 255 seconds]
<lgierth>
i guess it fits with the way go get works
<lgierth>
you don't really have versions of packages, everything just flows, or so
espadrine has quit [Ping timeout: 265 seconds]
cemerick has joined #ipfs
kvda has joined #ipfs
Kane` has joined #ipfs
JesseW has joined #ipfs
anewuser has joined #ipfs
anonymuse has joined #ipfs
sametsisartenep has quit [Quit: _·_ zzzZZZ]
firemound has quit [Quit: firemound]
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
<whyrusleeping>
lgierth: yeah... its pretty annoying
<whyrusleeping>
i'm hoping we can have better things soon
dignifiedquire has quit [Quit: Connection closed for inactivity]
<sdgathman>
Yah, that's going to get the relative links I suppose.
<sdgathman>
But not the downloads.
<panicbit-M>
Why would it not? The downloads are relative.
<sdgathman>
But maybe wget can do something with that (follow off site links and copy to subdir, fixing up urls)
<sdgathman>
panicbit-M: Oh! You're right
aarontyree has joined #ipfs
kvda has quit [Remote host closed the connection]
aarontyr_ has quit [Ping timeout: 250 seconds]
<sdgathman>
Now, since freedos doesn't sign their downloads or use https, I would like to see if anyone else has a copy of a download with a different hash. That would be a search by name, however.
<sdgathman>
yeah, wget pretty much did what I want for the copy.
<sdgathman>
Actually not.
<sdgathman>
The links still point to freedos!
<sdgathman>
I guess I just have to manually edit it.
<sdgathman>
But how would the relative links work anyway?
<sdgathman>
The scrape script would have to calculate the hash to create an ipfs link.
<JesseW>
ooh, panicbit-M looks very neat
<JesseW>
er, let me rephrase that
<JesseW>
panicbit-M: ooh, ipfscrape looks very neat
<panicbit-M>
Looks like wget needs to get passed some other parameters
<panicbit-M>
wget actually seems to rewrite the relative links to absolute links
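(The extra parameters panicbit-M is after are presumably wget's link-conversion options; a sketch of a typical self-contained mirror run, with a placeholder URL:)

```sh
wget --recursive --no-parent --page-requisites \
     --convert-links --adjust-extension \
     https://example.com/downloads/
# --convert-links rewrites the links to point at the local copies, so they
# should keep working after the mirrored directory is added with `ipfs add -r`.
```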
<JesseW>
From 2013-03-13 till 2015-2-17 it had one hash; then sometime between then and 2015-10-7 it changed, and it has remained with the new hash until at least 2016-4-3
cketti has quit [Quit: Leaving]
aarontyree has quit [Ping timeout: 244 seconds]
mildred has joined #ipfs
JesseW has quit [Ping timeout: 255 seconds]
ppham has quit [Remote host closed the connection]
rendar has joined #ipfs
pfrazee has joined #ipfs
ylp has joined #ipfs
bielewelt has joined #ipfs
dignifiedquire has joined #ipfs
cemerick has quit [Ping timeout: 264 seconds]
wuch has joined #ipfs
espadrine has joined #ipfs
firemound has quit [Ping timeout: 244 seconds]
espadrine has quit [Read error: Connection reset by peer]
espadrine has joined #ipfs
pfrazee has quit [Remote host closed the connection]
m0ns00n has quit [Quit: quit]
espadrine has quit [Ping timeout: 240 seconds]
bauruine has quit [Ping timeout: 250 seconds]
m0ns00n has joined #ipfs
mgue has quit [Ping timeout: 240 seconds]
bauruine has joined #ipfs
Ipe_ has joined #ipfs
mgue has joined #ipfs
Ipe_ has quit [Quit: Konversation terminated!]
espadrine has joined #ipfs
plddr has quit [Ping timeout: 255 seconds]
plddr has joined #ipfs
PseudoNoob has joined #ipfs
computerfreak has joined #ipfs
zorglub27 has joined #ipfs
<victorbjelkholm>
panicbit-M, ipfscrape might be a bit old and out of date, haven't updated it in a while... However, it's trivial to just use wget's recursive mode: `wget -r --no-parent https://example.com/downloads`
<victorbjelkholm>
panicbit-M, oh, I see, you got there later... Mind opening a PR with the changes you did to make it work better?
zorglub27 has quit [Ping timeout: 240 seconds]
Encrypt has joined #ipfs
zorglub27 has joined #ipfs
ylp1 has joined #ipfs
ylp has quit [Read error: Connection reset by peer]
silwol1 has left #ipfs [#ipfs]
sametsisartenep has joined #ipfs
Aeon has quit [Ping timeout: 250 seconds]
Aeon has joined #ipfs
computerfreak has quit [Remote host closed the connection]
<dignifiedquire>
🎉 all js-ipfs tests passing with the pull-stream conversion, last thing left to do is a small update to interface-ipfs-core :)
<victorbjelkholm>
order is important I'm guessing?
<daviddias>
victorbjelkholm: it is not :)
<victorbjelkholm>
woo! Ok
<daviddias>
the way the http-api (even in go) is designed
<daviddias>
is to not care about the order
<daviddias>
because of the flush/no flush
<daviddias>
in js-ipfs, we never flush (cause we don't have the flushing yet)
<victorbjelkholm>
ah, all right, I'll fix the test then
<victorbjelkholm>
thanks daviddias
<daviddias>
but in go-ipfs if you add a very large dir with a lot of nested dirs, you will see that go-ipfs will emit more than one hash for the same dir
<daviddias>
victorbjelkholm: 👌🏽
<victorbjelkholm>
oh
PseudoNoob has quit [Remote host closed the connection]
<victorbjelkholm>
dignifiedquire, hm, so if I run the tests from js-ipfs-api with modified interface-ipfs-core, I can be sure I'm not breaking the tests for go-ipfs as well?
<dignifiedquire>
victorbjelkholm: there are no tests for go itself yet (on an api level), just the interactions through the http api from js-ipfs-api
<geoah>
w00t cool domain lol, just noticed it! ipfs.team :D
<geoah>
hm... failed again... `Error: Cannot find module 'pull-stream'`. I'll check it again after a fresh cup of coffee in case I actually broke something :P
<dignifiedquire>
geoah: sounds like a caching issue
<dignifiedquire>
or a bug on js-ipfs-api side
<dignifiedquire>
let me check
pfrazee has joined #ipfs
<dignifiedquire>
geoah: it's a bug on our side, don't worry
matoro has joined #ipfs
ylp has quit [Quit: Leaving.]
Peeves has joined #ipfs
<dignifiedquire>
daviddias: can you please add me as an owner to ipfs-merkle-dag, already pushed and tagged the release on github, but the publish failed
cads has joined #ipfs
ligi_ has joined #ipfs
aarontyree has joined #ipfs
anonymuse has quit [Read error: Connection reset by peer]
anonymuse has joined #ipfs
aarontyr_ has quit [Ping timeout: 244 seconds]
ligi has quit [Ping timeout: 252 seconds]
m0ns00n has quit [Quit: quit]
mildred has quit [Ping timeout: 260 seconds]
Tv` has joined #ipfs
aarontyr_ has joined #ipfs
aarontyree has quit [Read error: Connection reset by peer]
cemerick has quit [Ping timeout: 260 seconds]
firemound has joined #ipfs
cketti has joined #ipfs
cketti has joined #ipfs
cketti has quit [Changing host]
matoro has quit [Ping timeout: 260 seconds]
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<victorbjelkholm>
dignifiedquire, you think you can take a look at the conflict in js-ipfs#323? Seems related to pull-streams and I don't want to destroy it
<dignifiedquire>
jbenet: awesome just in time so I can update js-cid :)
herzmeister has quit [Quit: Leaving]
<dignifiedquire>
victorbjelkholm: sure thing will take a look as soon as I got my coffee ;)
herzmeister has joined #ipfs
anonymuse has quit []
firemound has quit [Ping timeout: 252 seconds]
<dignifiedquire>
jbenet: what version would we use on the gateways? the human readable one or the regular one?
aarontyr_ has quit [Remote host closed the connection]
aarontyree has joined #ipfs
zorglub27 has quit [Quit: zorglub27]
ygrek has joined #ipfs
gmcquillan has quit [Ping timeout: 250 seconds]
<dignifiedquire>
victorbjelkholm: will be force pushing a rebased version once the tests pass locally
<dignifiedquire>
so that we can have nice changelogs
rgrinberg has quit [Client Quit]
rgrinberg has joined #ipfs
fxrs has joined #ipfs
ligi_ has quit [Remote host closed the connection]
<jbenet>
dignifiedquire: only ever use the regular one. the human readable one is just for debugging or explaining
<dignifiedquire>
jbenet: okay thanks 👍
<jbenet>
dignifiedquire and it may be "one-way" because delimiters are not guaranteed to be well defined (the human readable codes may have various delimiters in them and CID can't pick one guaranteed not to show up)
ppham has joined #ipfs
<victorbjelkholm>
dignifiedquire, sure thing, thanks for taking a look
Encrypt has quit [Quit: Quitte]
aarontyree has quit []
A124 has quit [Quit: '']
A124 has joined #ipfs
gmcquillan__ has joined #ipfs
gmcquillan__ is now known as gmcquillan
ygrek has quit [Ping timeout: 244 seconds]
jedahan has joined #ipfs
pfrazee has quit [Remote host closed the connection]
danielrf1 has quit [Read error: Connection reset by peer]
Oatmeal has quit [Ping timeout: 250 seconds]
pfrazee has joined #ipfs
espadrine_ has joined #ipfs
espadrine has quit [Ping timeout: 265 seconds]
espadrine_ has quit [Client Quit]
espadrine_ has joined #ipfs
<whyrusleeping>
daviddias: dignifiedquire quit breaking my tests
<daviddias>
?
<daviddias>
ah, running the js-ipfs-api ones?
<whyrusleeping>
js-ipfs-api tests are failing
<dignifiedquire>
whyrusleeping: as soon as daviddias gives me publish rights I can fix it
<dignifiedquire>
so make him
<daviddias>
is that the real reason?
<dignifiedquire>
victorbjelkholm: need to fix some things related to pull-streams, but getting close
<whyrusleeping>
dignifiedquire: dont push broken stuff in the first place yo
<dignifiedquire>
daviddias: that is the real reason why I haven't fixed it yet
<dignifiedquire>
whyrusleeping: I try
<daviddias>
what is the PR? Has it been reviewed? Has it been merged?
<dignifiedquire>
daviddias: I missed declaring the pull-stream dependency in js-ipfs-merkle-dag
<daviddias>
shame on you :P
<dignifiedquire>
and wanted to release it as written above
<dignifiedquire>
but I don't have the rights on npm
<dignifiedquire>
so all is pushed to gh and tagged
<victorbjelkholm>
dignifiedquire, yeah, was reading "I managed to find the issue" but I'm not sure I understand the issue 100% and just want to make sure you're not trying to fix something I already had working and might know the solution to :)
<dignifiedquire>
no, this is a bug introduced in the pull stream changes
<dignifiedquire>
it's a nasty race condition, that wasn't triggered before
<victorbjelkholm>
ooh, I see, hence I couldn't find where pull-file was being used in files :p
<dignifiedquire>
yeah pull-file is used down the stack, ipfs-repo -> pull-fs-blob-store -> pull-file
<richardlitt>
`ipfs add` and `ipfs pin add` are the same, correct?
<richardlitt>
They both add the hash of what is added to whatever is shown in `ipfs pin ls`
<dignifiedquire>
no they are not
<richardlitt>
dignifiedquire: That's what I thought.
<dignifiedquire>
ipfs pin add, only pins something that is already known to ipfs
<dignifiedquire>
ipfs add, takes a file adds it to ipfs and pins it by default, which can be disabled using a flag
<richardlitt>
so, running ipfs add <hash> and then ipfs pin add <hash> is unnecessary
<dignifiedquire>
yes
<richardlitt>
Thank you.
<richardlitt>
One final question, I think: ipfs add <hash> will by default not pin this recursively
palkeo has quit [Quit: Konversation terminated!]
<dignifiedquire>
it will in the sense that if it is a large file all chunks will be pinned, but for adding a directory you need to pass `-r` to add and pin recursively
<richardlitt>
In that case, why does it say `<hash> recursive`, even if -r isn't specified?
<richardlitt>
echo `sdljflsjlfsjl` | ipfs add
<richardlitt>
ipfs pin ls <correspondingHash> will show: `<correspondingHash> recursive`
<richardlitt>
Why?
<richardlitt>
(thanks dignifiedquire!! trying to get to the bottom of this)
<dignifiedquire>
because recursive pin != recursive add
<dignifiedquire>
recursive add => walk unixfs folders recursively and add all of them
<dignifiedquire>
recursive pin => pin this hash, and all hashes that are linked to it
<dignifiedquire>
so in that sense it defaults to recursive pin
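(A terminal sketch of the distinction dignifiedquire is drawing; the hash is a placeholder, and the `--pin=false` flag name is an assumption based on "can be disabled using a flag" above:)

```sh
# "ipfs add" stores the content AND pins it; the pin is recursive by default,
# i.e. it covers the object's links, even though no -r was passed to add:
echo "hello" | ipfs add
ipfs pin ls <hash>            # -> "<hash> recursive"

# "ipfs pin add" only pins something the node already knows about,
# so running it right after "ipfs add" is redundant:
ipfs pin add <hash>

# Adding a whole directory is the "recursive add" sense and needs -r:
ipfs add -r ./somedir

# Adding without pinning (flag name assumed, see above):
ipfs add --pin=false somefile
```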
<richardlitt>
How are those hashes linked to it?
<dignifiedquire>
the same as "ipfs pin add" defaults to recursive pinning
<dignifiedquire>
if you add a file to ipfs through "ipfs add" it gets put into a specific structure, a dagnode
<dignifiedquire>
that dagnode can have links
<richardlitt>
What format are those links?
<richardlitt>
Where can I read more about those links?
<dignifiedquire>
those are just regular ipfs hashes
espadrine_ has joined #ipfs
<richardlitt>
how do I know if there are links attached to a hash?
<dignifiedquire>
the `ipfs object` commands are where you want to look if you want to view those raw merkledag nodes
<dignifiedquire>
there are options to view the links of an object for example
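(For completeness, the commands that expose those links look something like this, with a placeholder hash:)

```sh
ipfs object links <hash>   # hash, size and name of each link of the dag node
ipfs object get <hash>     # the raw node, including its Links array
ipfs refs -r <hash>        # everything reachable from it, i.e. what a recursive pin keeps
```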
<richardlitt>
thank you, dignifiedquire. i've got something to do, will be back in a few minutes. working on a blog post, I appreciate the help.
<richardlitt>
Hadn't run across recursive pin vs recursive add before
musicmatze[m] has joined #ipfs
corvinux has joined #ipfs
<musicmatze[m]>
Hi. I just watched two videos on IPFS (Stanford talk and some 15 minute hands on) and wanted to stop by here and say that this looks awesome! I really want to play around with this as soon as possible... And I have some questions, though not now (2300 in my timezone and I have to get up early tomorrow) but after my vacation I will come back and ask a lot of things if you people in here are okay with it. (Sorry for the
<musicmatze[m]>
wall of text)
<lgierth>
musicmatze[m]: you're welcome to -- happy to hear you like it
corvinux has quit [Remote host closed the connection]
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
jedahan has joined #ipfs
ashark has quit [Ping timeout: 264 seconds]
* panicbit-M
waves musicmatze
jedahan has quit [Client Quit]
ckwaldon has joined #ipfs
<panicbit-M>
The alpha already looks very promising
jedahan has joined #ipfs
<musicmatze[m]>
panicbit: are you hanging out in all channels? :-) ... I'm afk now... 6 hours left to sleep. See you!
<galois_d_>
Question for anybody who might know: given the IP address and port of a running IPFS daemon (i.e., I know there's an IPFS daemon running on 10.0.0.1 port 4001 TCP, so I can make the string "/ip4/10.0.0.1/tcp/4001/ipfs", but I don't have the hash), is there any way to discover its hash (and thus be able to connect to it as a peer)?
galois_d_ is now known as galois_dmz
xelra has quit [Ping timeout: 260 seconds]
<daviddias>
dignifiedquire: saw your comment about pull-file
<daviddias>
the description of the repo is: "EXPERIMENTAL: Pull streams implementation of a file reader"
<daviddias>
now that stream-to-pull-stream conversion is all good, anything against using fs.createReadStream?
<dignifiedquire>
daviddias: that would be very inefficient
<daviddias>
dignifiedquire: how so?
<lgierth>
whyrusleeping: yeah and the metrics showed a nice bump of RX/TX bytes
<whyrusleeping>
hot
ckwaldon has quit [Read error: Connection reset by peer]
<dignifiedquire>
pull-file uses fs.open and fs.read directly; if we use the readStream, things have to go through a node stream + a pull-stream, and we don't get the laziness of pull-streams, i.e. that it only reads when we actually need it to read
Oatmeal has quit [Quit: Suzie says, "TTFNs!"]
<daviddias>
but we do get the correctness
<dignifiedquire>
not guaranteed
<dignifiedquire>
there can still be bugs
<dignifiedquire>
I've been reviewing pull-file closely for the last hours + fs.createReadStream
<daviddias>
ok, from your comment on github it sounded like you had concluded that a pull-file fix wasn't possible.
<dignifiedquire>
I'm pretty confident I can get pull-file ready for prime time
<daviddias>
there can still be bugs with the conversion?
<daviddias>
dignifiedquire: sweet:)
<dignifiedquire>
no not at all, I was just tired and had to stop
<dignifiedquire>
if you want to start generating a changelog.md, do a release using "touch CHANGELOG.md && ./node_modules/.bin/aegir-release --first" that will generate the changelog
<dignifiedquire>
(and do a release)
<dignifiedquire>
only needs to be done the first time around
cow_2001 has joined #ipfs
ebel has joined #ipfs
<jbenet>
whyrusleeping: confirm pls: the wantlists are updated only by +/- deltas (want and cancel) and not by sending it whole again? (i.e. sent whole only once)
<whyrusleeping>
jbenet: confirmed
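(Roughly, the message shape that confirmation implies; the field names here are illustrative, not the exact go-ipfs bitswap types:)

```go
package bitswapsketch

// Entry is one wantlist change carried in a message (illustrative names).
type Entry struct {
	Key      string // multihash of the wanted block
	Priority int
	Cancel   bool // true means "remove this key from my wantlist"
}

// Message is sent with Full=true once, carrying the complete wantlist;
// afterwards only +/- deltas (wants and cancels) are exchanged.
type Message struct {
	Full    bool
	Entries []Entry
}
```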
<daviddias>
dignifiedquire: 👍
<jbenet>
whyrusleeping thanks
ebel has quit [Ping timeout: 250 seconds]
gmcquillan__ has quit [Quit: gmcquillan__]
taaem has quit [Ping timeout: 250 seconds]
ebel has joined #ipfs
matoro has quit [Ping timeout: 252 seconds]
espadrine_ has joined #ipfs
shizy has quit [Ping timeout: 250 seconds]
gmcquillan__ has joined #ipfs
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
doesntgolf has joined #ipfs
nycoliver has joined #ipfs
xelra has joined #ipfs
PseudoNoob has quit [Remote host closed the connection]
matoro has joined #ipfs
<richardlitt>
Wow!
<richardlitt>
This is great!
fleeky_ has joined #ipfs
<lgierth>
the providers changes in 0.4.3-rc are great
<lgierth>
eeeh in master -- i'm not sure they're in 0.4.3-rc too
<lgierth>
anyhow, i can finally max out my little 2MB downstream