rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<NeoTeo>
Thanks ion that was easy enough - I'll see whether it worked once the change percolates through the system :)
devbug has quit [Read error: Connection reset by peer]
devbug has joined #ipfs
<whyrusleeping>
daviddias: alright, go-libp2p should be all vendored correctly
<whyrusleeping>
and all the deps are under 'gx/<hash>' now
r04r is now known as zz_r04r
<daviddias>
sweet! that is a combo! :D
<daviddias>
10x multiplier of awesome
<whyrusleeping>
:D
<whyrusleeping>
and the global install stuff should work just fine now
<daviddias>
kandinski: I would love to give you a concise answer, but what makes sense to say right now is: if you are looking for a "sha2" hash in an env that supports 2 types of hashes, you keep both ids and use the exact ID for that purpose
<daviddias>
the way that libp2p is built, is to support multiple Peer Routing mechanisms
<daviddias>
and one mechanism might opt to have one sha3000 namespace where it always hashes to that dimension,
<daviddias>
another can support both running at the same time
<daviddias>
our goal is to keep userspace safe from breaking changes
<ion>
whyrusleeping: nice
<daviddias>
if you look for QmHASH today, QmHASH today should return the same blob now, 10 years from now or infinity years from now, given that the blob is still available on the network :)
<ion>
(and that specific hash function hasn’t been broken. ;-) )
<kandinski>
daviddias: it was an open question prompted by this github issue on the multihash spec: src="http://foo.com/jquery.js"></script> then foo.com can't swap in a malicious jquery.js into the page that you wrote
<kandinski>
oops, sorry, mispaste
<daviddias>
your idea of padding, could be a good solution too
<kandinski>
in any case, it's up to the application using the multihash
<kandinski>
and IPFS would only have to specify it at the multihash-using layer (like routing)
NightRa has quit [Quit: Connection closed for inactivity]
<daviddias>
if length > length_of_digest_algorithm yields true
<daviddias>
then it means that someone is not reading the right number of bytes and passing them to multihash, or am I missing a scenario?
<daviddias>
if I pass a buffer with specified length 10 and I actually pass a 12 byte buffer
<daviddias>
since this would all be a pointer
<daviddias>
it should only read 10 bytes and discard the remaining two (or not even notice, because in another context that would be a buffer overflow)
jabberwocky has joined #ipfs
<kandinski>
daviddias: if someone gives you a :sha1, 12, <12 bytes> value
<kandinski>
I suggest "reject"
<kandinski>
sorry, that is truncated
<kandinski>
I meant 22, 22
<daviddias>
ah, for sure
<daviddias>
I thought we were talking about a sha1, 20, <thing with a 24-byte length>
<kandinski>
oh, that's also "nope" but for a mismatch between stated and actual length.
<kandinski>
at least no risk of overflow.
jabberwocky has quit [Ping timeout: 244 seconds]
<daviddias>
I will agree to that
<kandinski>
sorry, I'm being too terse. At least in the sha1, 22, 22 there is no risk of overflow. You still don't know what the hell that is. jbenet then turned it into a conversation on why you need to make hashes longer at times.
<daviddias>
but yeah, 1st scenario, should be fixed
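For reference, a minimal command-line sketch of the length rule being discussed (assuming a sha1 multihash, whose digest is 20 bytes and whose multihash prefix is 0x11 0x14, i.e. function code then digest length):

    printf 'hello' | sha1sum      # 40 hex chars = a 20-byte digest
    # a well-formed sha1 multihash is then: 0x11 0x14 <those 20 bytes>
    # "sha1, 20, <24-byte buffer>"  -> reject: length field != actual byte count
    # "sha1, 22, <22 bytes>"        -> reject: length field exceeds the sha1 digest size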
<kandinski>
I think moreati@github is on it, but I'm following the issue too
<kandinski>
a good thing that us newbs bring to the project is that we really have to *read* the specs
<daviddias>
it would be cool if we had some tests for cases that should throw but currently don't
<kandinski>
looking forward to helping with the sharness
<daviddias>
sounds awesome !:)
<kandinski>
but even then, the go implementation is moving so fast that the spec is not clear.
<kandinski>
And some of the assumptions need to be written down, because some of them might even be wrong!
<kandinski>
sharness will mostly take the go-ipfs implementation as the reference, right?
<kandinski>
"if go-ipfs diverges from the spec, either one should be fixed stat"
patcon has joined #ipfs
tsenior`` has joined #ipfs
<daviddias>
whyrusleeping: this is so rad: » gx install ◉ ◼◼◼◼◼◼◼◼◼◼
<daviddias>
installing package: go-libp2p-1.0.0
<daviddias>
installing package: go-log-0.0.0
<daviddias>
installing package: go-logging-0.0.0
<daviddias>
installing package: context-1.0.0
<daviddias>
installing package: go.uuid-0.0.0
<daviddias>
the sweet sound of gx
Smilex has quit [Quit: WeeChat 1.3]
devbug has quit [Read error: Connection reset by peer]
<whyrusleeping>
okay, did you set GO15VENDOREXPERIMENT to 1?
<whyrusleeping>
(either do that, or run 'gx install --global')
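Spelled out, the two options whyrusleeping mentions look like this (a sketch; gx behaviour here is taken from this conversation rather than verified independently):

    export GO15VENDOREXPERIMENT=1   # let Go 1.5 resolve the gx/<hash> vendored deps
    # ...or skip the env var and put the packages into the global GOPATH instead:
    gx install --global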
voxelot has quit [Ping timeout: 240 seconds]
<daviddias>
oh, gx install --global sounds more like what I wanted to do
<daviddias>
or maybe not
guest234234 has joined #ipfs
<daviddias>
sweet
<daviddias>
got it
<whyrusleeping>
either way :)
<whyrusleeping>
the global thing puts all the packages in your global GOPATH namespace
<daviddias>
all global looks nice
<daviddias>
there won't be conflicts after all
<daviddias>
npm could be the same
<daviddias>
npm i --global for everything
<daviddias>
Node knows where to find it
<whyrusleeping>
yeap!
<daviddias>
» gx install --global ◉ ◼◼◼◼◼◼◼◼◼◼
<daviddias>
installing package: go-libp2p-1.0.0
<daviddias>
and it just stalls there
<daviddias>
seems it is installing my current project to the global name space
<whyrusleeping>
what hash are you using?
<whyrusleeping>
for go-libp2p?
<daviddias>
go get -u github.com/ipfs/go-libp2p
<daviddias>
gx install --global
<whyrusleeping>
so it hangs after saying 'go-libp2p-1.0.0' ?
<daviddias>
ep
<daviddias>
yep
<ipfsbot>
[go-ipfs] rht opened pull request #1975: placeholder pr to rerun the test on djdv path parser pr (master...djdv) http://git.io/v4XuL
tsenior`` has quit [Ping timeout: 276 seconds]
<ipfsbot>
[go-ipfs] rht closed pull request #1975: placeholder pr to rerun the test on djdv path parser pr (master...djdv) http://git.io/v4XuL
ekaron has quit [Quit: Leaving]
chriscool has quit [Read error: No route to host]
chriscool has joined #ipfs
devbug has quit [Read error: Connection reset by peer]
NeoTeo has quit [Quit: ZZZzzz…]
<whyrusleeping>
daviddias: could be that it was trying to find the packages through ipfs and being slow about it
chriscool has quit [Read error: Connection reset by peer]
chriscool has joined #ipfs
domanic has joined #ipfs
surajravi has quit [Remote host closed the connection]
domanic has quit [Ping timeout: 260 seconds]
domanic has joined #ipfs
chriscool has quit [Ping timeout: 240 seconds]
ricmoo has joined #ipfs
simonv3 has quit [Quit: Connection closed for inactivity]
surajravi has joined #ipfs
surajravi has quit [Remote host closed the connection]
JasonWoof has quit [Read error: Connection reset by peer]
devbug has joined #ipfs
JasonWoof has joined #ipfs
surajravi has joined #ipfs
danielrf3 has joined #ipfs
guest234234 has quit [Ping timeout: 240 seconds]
kitcambridge_ has joined #ipfs
M-rschulman1 has quit [Ping timeout: 250 seconds]
danielrf2 has quit [Ping timeout: 250 seconds]
noffle has quit [Ping timeout: 250 seconds]
tymat has quit [Ping timeout: 250 seconds]
barnacs_ has quit [Ping timeout: 250 seconds]
M-jfred has quit [Ping timeout: 250 seconds]
alu has quit [Ping timeout: 250 seconds]
lidel has quit [Ping timeout: 250 seconds]
orzo has quit [Ping timeout: 250 seconds]
Rylee has quit [Ping timeout: 250 seconds]
rektide_ has quit [Ping timeout: 250 seconds]
deficiency has quit [Ping timeout: 250 seconds]
jfred has quit [Ping timeout: 250 seconds]
arpu has quit [Ping timeout: 250 seconds]
kitcambridge has quit [Ping timeout: 250 seconds]
ebarch has quit [Ping timeout: 250 seconds]
vakla has quit [Ping timeout: 250 seconds]
yaoe has quit [Ping timeout: 250 seconds]
kitcambridge_ is now known as kitcambridge
alu has joined #ipfs
barnacs has joined #ipfs
orzo has joined #ipfs
lidel has joined #ipfs
arpu has joined #ipfs
noffle has joined #ipfs
tymat has joined #ipfs
yaoe has joined #ipfs
jfred has joined #ipfs
rektide has joined #ipfs
vakla has joined #ipfs
Rylee has joined #ipfs
ebarch has joined #ipfs
M-rschulman1 has joined #ipfs
jabberwocky has joined #ipfs
M-jfred has joined #ipfs
jabberwocky has quit [Ping timeout: 240 seconds]
<livegnik>
Is there something like a pastebin for IPFS, or even for PDF documents?
<livegnik>
It would be nice to have a place to upload papers to. Papers that are related to IPFS, Bitcoin, and crypto in general. A lot of them are hosted by folks themselves, and some have been up for over a decade.
<livegnik>
It wouldn't amaze me if some of those servers go down at some point, and losing papers that are hosted on those servers would be a true loss to everyone living in the future.
<lgierth>
livegnik: checkout the archives repo :) github.com/ipfs/archives
<lgierth>
livegnik: checkout the archives repo :) github.com/ipfs/archives
<lgierth>
oops
<livegnik>
Word! Cheers. :)
<ansuz>
rule 34
<ansuz>
if it exists, there's an ipfs version
<ansuz>
no exceptions
<livegnik>
Looking at all the commits, I really like the amount of effort that people are putting into IPFS. I should really free up some time to read further into it.
<livegnik>
Rule 34, noted. Thank you. :)
devbug has quit [Read error: Connection reset by peer]
patcon has quit [Ping timeout: 276 seconds]
<padz>
ipfsbin.xyz
doublec_ has joined #ipfs
doublec has quit [Ping timeout: 272 seconds]
doublec_ is now known as doublec
surajravi has quit [Remote host closed the connection]
doublec_ has joined #ipfs
<whyrusleeping>
lol, it took me a little bit to find out why we were talking about rule34
doublec has quit [Ping timeout: 255 seconds]
guest234234 has joined #ipfs
ygrek has quit [Ping timeout: 276 seconds]
reit has joined #ipfs
voxelot has joined #ipfs
koo7 has quit [Ping timeout: 260 seconds]
<ipfsbot>
[go-ipfs] lgierth force-pushed discovery-cjdns from 772931d to ba4f31e: http://git.io/v4K3P
<ipfsbot>
go-ipfs/discovery-cjdns 492742f Lars Gierth: cjdns: query admin api for possible peers...
<ipfsbot>
go-ipfs/discovery-cjdns ba4f31e Lars Gierth: cjdns: dial possible peers...
<ipfsbot>
go-ipfs/discovery-cjdns 76785ec Lars Gierth: bootstrap: remove severly outdated comment...
jabberwocky has joined #ipfs
<ipfsbot>
[go-ipfs] lgierth force-pushed discovery-cjdns from ba4f31e to 3bf0875: http://git.io/v4K3P
<ipfsbot>
go-ipfs/discovery-cjdns 3bf0875 Lars Gierth: cjdns: dial possible peers...
<lgierth>
whyrusleeping: i can't quite get secio to shake hands over cjdns
<whyrusleeping>
you shouldnt have to ever run it manually again (ipfs-update is going to do it for you)
<whyrusleeping>
but some field testing would be nice
<whyrusleeping>
make sure to backup your ipfs repo beforehand
<whyrusleeping>
(i've run it on all my repos and it came out perfectly, but until more people run it, i'm erring on the side of caution)
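A sketch of what backing up the repo beforehand can look like (paths and the zfs dataset name are placeholders):

    cp -a ~/.ipfs ~/.ipfs.bak                    # plain copy of the repo
    # or snapshot the filesystem, as achin does below:
    zfs snapshot tank/home@pre-ipfs-update       # 'tank/home' is a hypothetical dataset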
* achin
does a zfs snapshot and tries an upgrade
infinity0 has quit [Remote host closed the connection]
amstocker_ has joined #ipfs
domanic has joined #ipfs
infinity0 has joined #ipfs
amstocker has quit [Ping timeout: 255 seconds]
<achin>
if i can figure out how to build it...
<achin>
not sure how to fix ipfs-1-to-2/go-datastore/key.go:7:2: cannot find package "github.com/ipfs/go-ipfs/Godeps/_workspace/src/code.google.com/p/go-uuid/uuid" in any of:
lgierth has quit [Quit: WeeChat 1.1.1]
patcon has joined #ipfs
<whyrusleeping>
achin: hrm...
<whyrusleeping>
achin: thats a bug
<whyrusleeping>
fixing
harlan_ has joined #ipfs
lgierth has joined #ipfs
<whyrusleeping>
achin: update and try again?
CarlWeathers has joined #ipfs
<achin>
excellent. built, running now
TheWhisper has quit [Ping timeout: 246 seconds]
<lgierth>
the secio thing doesn't seem to be a master vs. dev0.4.0 issue either
<lgierth>
i'll try again tomorrow
* lgierth
zz
<achin>
(migration still running. repo is only about 1.7GB)
<whyrusleeping>
achin: hrm...
<whyrusleeping>
mine's 2.1GB and only took a couple seconds
<achin>
last thing it printed to console was 'transfered indirect pins'
<achin>
ah ha, just finished
<whyrusleeping>
ah, you likely have way more pins than i do
<whyrusleeping>
its an unfortunately slow operation
<whyrusleeping>
i'll try to speed that up
<whyrusleeping>
or provide some feedback as to progress
<achin>
likely. i can count them, if you'd like
<whyrusleeping>
sure
<achin>
$ ./ipfs pin ls --type=all |wc -l
<achin>
12764
<achin>
of all my pins, 12627 are indirect
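Presumably that breakdown comes from the same command with a narrower filter, e.g.:

    ipfs pin ls --type=indirect | wc -l    # count only the indirect pins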
amstocker__ has joined #ipfs
<achin>
migration seems to be OK. the 0.4.0 daemon starts up, and some basic ipfs commands are working
amstocker_ has quit [Ping timeout: 246 seconds]
bedeho has joined #ipfs
kanzure has quit [Ping timeout: 240 seconds]
padz has quit [Ping timeout: 276 seconds]
padz has joined #ipfs
zugz has quit [Ping timeout: 276 seconds]
ygrek has joined #ipfs
bedeho has quit [Ping timeout: 264 seconds]
zugz has joined #ipfs
<whyrusleeping>
achin: okay, good to know
<whyrusleeping>
achin: alright, i pushed some updates, should be faster if you care to try again
<achin>
k
<whyrusleeping>
should also give more feedback
<achin>
restoring ~/.ipfs from backup...
<achin>
ok, running now 7bbfcc68cca3f9648e39342a718e443f2de5df16
voxelot has quit [Ping timeout: 272 seconds]
<achin>
woah, took like... 0 seconds
<ion>
nice
<whyrusleeping>
lol
<tperson>
nice, or broken?
padz has quit [Ping timeout: 240 seconds]
<tperson>
Seems too fast lol
<whyrusleeping>
nope, just whyrusleeping not being stupid anymore
<whyrusleeping>
'hey, lets read every single indirect pin from the old repo, write them to the new repo, and then just delete them'
<tperson>
My guess is no change to fsrepo?
<whyrusleeping>
nope
<whyrusleeping>
just to the migration
<achin>
ipfs pin ls --type=all is still producing correct-looking data, so... \o/
padz has joined #ipfs
timgws has joined #ipfs
<tperson>
Does Intel really have a trademark on Core?
<tperson>
I just noticed there is a tm after Core in Intel Core
<ion>
IP®FS™
<ion>
patent pending
kanzure has joined #ipfs
kanzure has quit [Client Quit]
diadelphian has joined #ipfs
doublec_ is now known as doublec
domanic has quit [Ping timeout: 240 seconds]
devbug has quit [Ping timeout: 240 seconds]
padz has quit [Ping timeout: 276 seconds]
NightRa has joined #ipfs
wrouesnel2 has joined #ipfs
TheWhisper has joined #ipfs
padz has joined #ipfs
diadelphian has quit [Ping timeout: 260 seconds]
devbug has joined #ipfs
chriscool has joined #ipfs
zugz has quit [Read error: Connection reset by peer]
shyamsk has quit [Ping timeout: 244 seconds]
zugz has joined #ipfs
surajravi has joined #ipfs
surajravi has quit [Client Quit]
timgws has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
shyamsk has joined #ipfs
timgws has joined #ipfs
harlan_ has quit [Quit: Connection closed for inactivity]
chriscool has quit [Ping timeout: 260 seconds]
merlijn_ has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to dev0.4.0: http://git.io/v41Py
<ipfsbot>
[js-ipfs-api] Dignifiedquire created greenkeeper-request-2.67.0 (+1 new commit): http://git.io/v41HH
<ipfsbot>
js-ipfs-api/greenkeeper-request-2.67.0 ec435af greenkeeperio-bot: chore(package): update request to version 2.67.0...
e-lima has joined #ipfs
<whyrusleeping>
working out later at night sucks
<whyrusleeping>
daviddias: you around>
<whyrusleeping>
?
<ipfsbot>
[js-ipfs-api] Dignifiedquire deleted greenkeeper-request-2.67.0 at ec435af: http://git.io/v41QE
Tv` has quit [Quit: Connection closed for inactivity]
mercurytw has joined #ipfs
<mercurytw>
hi. I'm trying to use the webui but my node info page is blank, and file uploads don't seem to work
<mercurytw>
any idea what might cause this?
<whyrusleeping>
mercurytw: youre running the latest version of ipfs?
<mercurytw>
0.3.10-dev
<whyrusleeping>
okay, what os and browser?
<mercurytw>
accessing the webui via firefox 42.0 / win 10, deployed on Fedora 42 server edition
<mercurytw>
*Fedora 43
<whyrusleeping>
you mean fedora 23?
warner has quit [Ping timeout: 250 seconds]
<mercurytw>
yes
<whyrusleeping>
okay, i'm trying it out locally
<whyrusleeping>
i havent actually looked at the webui in a while to be honest
<mercurytw>
yeah I'm running this on a headless server so it seems like it'd be really convenient
<whyrusleeping>
yeah, agreed
<mercurytw>
oh man i typed fedora 43 lol I just noticed that
<whyrusleeping>
my internet is slower than a gimped turtle right now
<whyrusleeping>
i'm having trouble even connecting to people
<mercurytw>
sorry it's been a long day
<mercurytw>
thats a bummer. who's your provider?
zz_r04r is now known as r04r
<whyrusleeping>
i was hoping you didnt mean 43 lol, a friend just asked me for help installing 23 last week
<whyrusleeping>
i think my provider is apogee
<mercurytw>
hm never heard of them
<whyrusleeping>
its like packaged with the apartment i'm renting
<whyrusleeping>
the connection actually gets more stable when i use a vpn
<mercurytw>
oh those local providers are actually a good thing for the economy tbh
<whyrusleeping>
which is ridiculous
<whyrusleeping>
oh agreed, i'm all for supporting the little guy
<whyrusleeping>
just wish i could get more than 12/2 Mbps and do better than an average 1% packet loss
<mercurytw>
at least you're providing competition to the big guys... I've actually been considering switching to a competitor just for the sake of pushing ISPs in a good direction
<mercurytw>
lol yeah...
<whyrusleeping>
okay, i've got the webui pulled up, everything seems fine on my end
<whyrusleeping>
i can see my peer ID
<whyrusleeping>
and peers on the globe view
<mercurytw>
hm ok
<whyrusleeping>
does anything show up in your developer console?
<mercurytw>
in my dev console?
<mercurytw>
sorry I literally just installed this like 10 mins ago
<whyrusleeping>
i think its F12 on firefox
<mercurytw>
oh that
<whyrusleeping>
or, ctrl shift k
<whyrusleeping>
yah
<whyrusleeping>
its normally where i look when webpages are doing wrong things
warner has joined #ipfs
<mercurytw>
ah hm
<mercurytw>
looks like im getting a bunch of 403's
<whyrusleeping>
interesting...
<whyrusleeping>
are you doing 127.0.0.1:5001? or localhost:5001?
<mercurytw>
neither. I'm accessing remotely
<whyrusleeping>
oh!
<whyrusleeping>
yeah
<whyrusleeping>
thats why
<mercurytw>
ah no wonder. it's pointing at the wrong gateway
<whyrusleeping>
you'll need to change the config file a bit
<whyrusleeping>
and make sure that machine isnt accessible by anyone else
<whyrusleeping>
Addresses.API will need to be 0.0.0.0 (or a lan address) instead of 127.0.0.1
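For example (the multiaddrs are illustrative; restart the daemon afterwards, and keep in mind the API port has no auth, so don't expose it to the open internet):

    ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001       # or a LAN address
    ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080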
<mercurytw>
I set addresses.api and addresses.gateway to my static IP, and opened the ports on the server's firewall
<mercurytw>
otherwise i wouldn't be seeing any webui at all
<mercurytw>
it seems like the code is still trying to access the gateway at 127.0.0.1 though
<mercurytw>
let me see where this call is coming from...
<whyrusleeping>
ah, okay
<whyrusleeping>
the other thing you need to change is: git checkout -b rht-no-chunk-channel dev0.4.0
<haadcode>
dignifiedquire: hey morning. great news on the streaming! did that fix the problem of not starting the stream immediately, also? good work in any case, hoping to see the PR in master and npm soon.
dignifiedquire_ has joined #ipfs
merlijn_ has quit [Ping timeout: 260 seconds]
merlijn_ has joined #ipfs
kerozene has quit [Ping timeout: 255 seconds]
<mercurytw>
and I'm still gettting a couple errors on the page
<mercurytw>
but just in the logs
<mercurytw>
ah ok
<mercurytw>
lets try
<mercurytw>
that's the magic sauce
<mercurytw>
thank you dude
M-davidar-test has quit [Ping timeout: 276 seconds]
farfetchedness has joined #ipfs
<haadcode>
mercurytw: np
M-oddvar1 has quit [Ping timeout: 276 seconds]
<mercurytw>
can someone test my connection?
<whyrusleeping>
mercurytw: give your peer id
<mercurytw>
at /ipfs/QmSgpWm4RBWzoJWV61NfHVgXn6fSzRe7Zv7BEcoQ5aKVv5
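A quick way for another peer to test that, assuming the hash resolves to an ordinary IPFS object:

    ipfs object stat QmSgpWm4RBWzoJWV61NfHVgXn6fSzRe7Zv7BEcoQ5aKVv5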
<whyrusleeping>
anyways, its way past the time where i normally pass out for a third of the day
<whyrusleeping>
so i'm gonna go do that
<mercurytw>
alright thanks a ton for the help dude
<mercurytw>
you're the ipfs dev right?
<whyrusleeping>
if you run into anything while i'm gone feel free to file an issue
<whyrusleeping>
yeah, i'm on go-ipfs
<mercurytw>
thanks for making such a cool service
<whyrusleeping>
:) we try!
merlijn_ has quit [Quit: Lost terminal]
<dignifiedquire>
good morning everyone
<dignifiedquire>
and good passing out whyrusleeping ;)
s_kunk has joined #ipfs
harlan_ has joined #ipfs
guest234234 has quit [Read error: Connection reset by peer]
<mercurytw>
hrm maybe I'll make a setup script to do all this stuff
<mercurytw>
are there any plans to make the ipfs daemon daemonize properly?
pepesza has joined #ipfs
ygrek has quit [Ping timeout: 246 seconds]
CarlWeathers has quit [Ping timeout: 250 seconds]
mercurytw has left #ipfs [#ipfs]
farfetchedness has quit [Killed (Sigyn (Spam is off topic on freenode.))]
phenylglyoxylic has joined #ipfs
<dignifiedquire_>
daviddias: did you have a chance to check out my wreck branch yet?
phenylglyoxylic has quit [Killed (Sigyn (Spam is off topic on freenode.))]
<daviddias>
Morning:)
<daviddias>
whyrusleeping: I am now :)
jadeite has joined #ipfs
<daviddias>
dignifiedquire_: will take a look
<dignifiedquire_>
daviddias: :)
<haadcode>
+1 for that :)
<haadcode>
looking forward to seeing the master fixed
<dignifiedquire_>
daviddias: also it seems you were right in terms of speed, it seemed to me that things were getting faster after switching to wreck, but I don’t have any numbers
simpbrain has quit [Quit: Leaving]
dignifiedquire has quit [Quit: dignifiedquire]
dignifiedquire_ is now known as dignifiedquire
simpbrain has joined #ipfs
<dignifiedquire>
haadcode: daviddias any ideas how I could write a test to ensure streaming is started immediately instead of buffering?
<haadcode>
hmmm
NeoTeo has joined #ipfs
CarlWeathers has joined #ipfs
<haadcode>
dignifiedquire: I suppose you could check that the stream gets piped before 'close' or 'end' or 'finish' or whatever event the stream passes when it has finished (eg. called with passThrough.end() equivalent in the new code)
<daviddias>
dignifiedquire: this is top!! :D :D thank you for #125
bret-raspi has quit [Ping timeout: 240 seconds]
<daviddias>
dignifiedquire: the most solid way to ensure it would always have the same result would be to mock the API on that call
<daviddias>
otherwise, depending on the size of the file, there might be several layers of buffering, from sending to the network to receiving, that all happen together
<daviddias>
in a local env
<dignifiedquire>
daviddias: how would that help us ensuring that res without buffering first?
<dignifiedquire>
*is returned
<daviddias>
because you could make the mocked API only send one chunk first
bret-raspi has joined #ipfs
<daviddias>
and only send the rest, after you confirm you read the first one
<dignifiedquire>
right
<daviddias>
dignifiedquire: did you have the chance to try #125 in the browser?
<dignifiedquire>
daviddias: all browser tests are passing with #125
<dignifiedquire>
also made some manual tests with the minified versions to ensure the export is correct
<dignifiedquire>
just run gulp test:browser and see
<daviddias>
oh, that is extraordinary, I remember that xicombd was having some problems at first WebPacking js-ipfs-api when we made ipfs-chrome-station
<daviddias>
awesome
<ipfsbot>
[js-ipfs-api] diasdavid pushed 3 new commits to master: http://git.io/v4M1n
<ipfsbot>
js-ipfs-api/master c2a6ad3 dignifiedquire: refactor: Switch to webpack and wreck...
<ipfsbot>
js-ipfs-api/master a1e2c39 David Dias: Merge pull request #125 from Dignifiedquire/webpack...
<dignifiedquire>
daviddias: yeah, webpack doesn't put fake stuff into your package by default like browserify does, so I had to pull in a couple of shims manually, but that makes the resulting bundle smaller because I only add what I need: https://github.com/ipfs/js-ipfs-api/blob/master/tasks/config.js#L12-L14
<haadcode>
excellent daviddias dignifiedquire
<dignifiedquire>
haadcode: please test it until it breaks and tell us :)
<haadcode>
will do as soon as it is in npm ;)
<daviddias>
ahaha fake stuff :P
amstocker__ has quit [Ping timeout: 246 seconds]
<daviddias>
dignifiedquire: on a clean install, `gulp test:browser` is erroring for me
<dignifiedquire>
what’s the error?
<daviddias>
it's big
<daviddias>
ERROR in ./~/boom/lib/index.js
<daviddias>
Module not found: Error: Cannot resolve module 'stream-http' in /Users/david/Documents/code/ipfs/js-ipfs-api/node_modules/boom/lib
<daviddias>
@ ./~/boom/lib/index.js 5:13-28
<daviddias>
but errors around the lack of stream-http
<dignifiedquire>
did you run “npm install”?
<dignifiedquire>
argh damn sorry
<dignifiedquire>
missing deps in package.json
<daviddias>
ah, so stream-http is just a shim? cause running on Node.js, ran just fine
<dignifiedquire>
please run "npm install --save-dev stream-http browserify-https"
<dignifiedquire>
yes they are the shims for http and https in the browser (the same browserify uses)
<daviddias>
it was just stream-http, but it works, sweet! :D
<dignifiedquire>
:)
<daviddias>
this is a big day! :D super happy that browser tests now pass
<ipfsbot>
[js-ipfs-api] diasdavid tagged v2.8.0 at d58cee3: http://git.io/v4MHk
<haadcode>
dignifiedquire: fixed it. there's a bug there somewhere it seems :) thanks for testing!
<achin>
take a look at `ipfs object patch`
<achin>
cow_2001: you can use "ipfs object patch HASH1 add-link dirname HASH2" to create a copy of hash1 that includes a link named "dirname" to hash2
<cow_2001>
what happens if names collide?
elima_ has joined #ipfs
<achin>
the old link is removed
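A small end-to-end sketch (the empty-directory template and the names here are only examples; HASH2 is the object being linked, as above):

    EMPTY=$(ipfs object new unixfs-dir)                       # fresh empty directory object
    DIR=$(ipfs object patch $EMPTY add-link dirname $HASH2)
    ipfs ls $DIR                                              # shows the new 'dirname' link
    # patching again with the same link name replaces the old link, as noted above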
e-lima has quit [Ping timeout: 240 seconds]
kanzure has joined #ipfs
xicombd has quit [Ping timeout: 250 seconds]
nskelsey has quit [Ping timeout: 250 seconds]
zmanian_ has quit [Ping timeout: 246 seconds]
true_droid has quit [Ping timeout: 264 seconds]
cemerick has quit [Ping timeout: 250 seconds]
nskelsey has joined #ipfs
true_droid has joined #ipfs
zmanian_ has joined #ipfs
xicombd has joined #ipfs
hoony has joined #ipfs
Hsi has quit [Killed (Sigyn (Spam is off topic on freenode.))]
<fazo>
cow_2001: achin: if my understanding is correct, in go-ipfs 0.4 the files api will allow posix-style file management commands to be executed on ipfs objects
<fazo>
like rm, mkdir, mv etc
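If that lands as described, usage should look roughly like this (subcommand names as fazo describes them; hashes and paths are placeholders):

    ipfs files mkdir /docs
    ipfs files cp /ipfs/<hash> /docs/paper.pdf
    ipfs files ls /docs
    ipfs files rm /docs/paper.pdf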
bipeltate has joined #ipfs
ashark has joined #ipfs
<cow_2001>
woah.
sstangl has quit [Ping timeout: 250 seconds]
sstangl has joined #ipfs
NightRa has joined #ipfs
wking has quit [Ping timeout: 260 seconds]
wking has joined #ipfs
jabberwocky has joined #ipfs
cemerick has joined #ipfs
jcdietrich has joined #ipfs
jabberwocky has quit [Ping timeout: 244 seconds]
koo8 has joined #ipfs
koo8 has left #ipfs ["Ragequit"]
Encrypt has joined #ipfs
<achin>
fazo: neato
hoony has quit [Remote host closed the connection]
voxelot has joined #ipfs
voxelot has joined #ipfs
fazo has quit [Ping timeout: 246 seconds]
ebarch has quit [Quit: Gone]
mildred has quit [Ping timeout: 240 seconds]
NeoTeo has quit [Quit: ZZZzzz…]
ebarch has joined #ipfs
amade has joined #ipfs
M-erikj is now known as erikj`
ricmoo has quit [Ping timeout: 246 seconds]
roguism has joined #ipfs
roguism1 has quit [Ping timeout: 244 seconds]
border0464 has quit [Quit: sinked]
Senji has joined #ipfs
border0464 has joined #ipfs
besenwesen has quit [Quit: ☠]
besenwesen has joined #ipfs
besenwesen has quit [Changing host]
besenwesen has joined #ipfs
jabberwocky has joined #ipfs
border0464 has quit [Ping timeout: 252 seconds]
border0464 has joined #ipfs
<whyrusleeping>
davidar: sure thing
<multivac>
whyrusleeping: 2015-11-19 - 12:30:29 <daviddias> tell whyrusleeping can you give me admin for the webui and push rights to dignifiedquire ?
<whyrusleeping>
er
<whyrusleeping>
daviddias: sure thing
jabberwocky has quit [Ping timeout: 272 seconds]
<whyrusleeping>
you guys need to pick easier to tab complete names
<ion>
_whitelogger: good point
<ion>
er
<whyrusleeping>
lol
forth has joined #ipfs
flyingkiwi has quit [Quit: bye]
flyingkiwi has joined #ipfs
<voxelot>
daviddias: whyrusleeping: everyone: o/
bipeltate has quit [Killed (Sigyn (Spam is off topic on freenode.))]
<whyrusleeping>
voxelot: heyo
Rajarshi has joined #ipfs
<voxelot>
how ya doing? been mia a bit, london kicked my ass
<whyrusleeping>
doin pretty good, trying to kick 0.3.10 out the door so i can do the same to 0.4.0
<whyrusleeping>
if you want to help out, you can try ipfs-update out for a spin
<voxelot>
haha nice, whats the current build? i'm still 0.3.8-dev
<voxelot>
something about upgrading my go had me stop building 0.3.9
<voxelot>
<- lazy , just upgraded go
<whyrusleeping>
the current build is 0.3.10-dev
<whyrusleeping>
0.3.9 is the latest release
<whyrusleeping>
but the update tool is pretty nice, if i do say so myself :)
<whyrusleeping>
(i actually use it a lot now for my own dev work)
<voxelot>
cool so how do i test, do a go get and then use the command?
<daviddias>
seems you factored out geoip, sounds good :)
<6JTACIPBU>
[webui] diasdavid pushed 2 new commits to master: http://git.io/v494u
<6JTACIPBU>
webui/master a00fc44 dignifiedquire: Use lookupPretty from ipfs-geoip
<6JTACIPBU>
webui/master 0c15f71 David Dias: Merge pull request #93 from Dignifiedquire/geoip...
<7YUAAC0N3>
[webui] diasdavid closed pull request #93: Use lookupPretty from ipfs-geoip (master...geoip) http://git.io/v8wEs
<whyrusleeping>
ipfsbot, get your shit together
dignifiedquire_ has quit [Quit: dignifiedquire_]
chriscool has quit [Ping timeout: 240 seconds]
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed fix/kbucket-fill from df0be33 to 06bc124: http://git.io/v49uw
<ipfsbot>
go-ipfs/fix/kbucket-fill 06bc124 Jeromy: if bucket doesnt have enough peers, grab more elsewhere...
<daviddias>
whyrusleeping: ahaha
Encrypt has quit [Quit: Quitte]
<ipfsbot>
[go-ipfs] whyrusleeping created fix/record-accounting (+1 new commit): http://git.io/v49aR
<ipfsbot>
go-ipfs/fix/record-accounting 81645b9 Jeromy: send record fixes to peers who send outdated records...
<ipfsbot>
[go-ipfs] whyrusleeping opened pull request #1977: send record fixes to peers who send outdated records (dev0.4.0...fix/record-accounting) http://git.io/v49VV
<jcdietrich>
master8787: when I got the m3u it was wrapped in a json blob and had newlines encoded. Once I stripped the envelope and replaced the \ns with real newlines, BINGO!
<jcdietrich>
am i doing something wrong that is giving me the json wrap?
domanic has quit [Ping timeout: 272 seconds]
<ion>
whyrusleeping: Thanks, I didn't realize they have songs like that.
<dignifiedquire>
whyrusleeping: yes it would be
<whyrusleeping>
ion: yeah, soad has a few of them like that
<master8787>
jcdietrich: thank you :-) i have no problem with ipfsbin, just open link inside web browser an copy-paste to new m3u file
compleatang has joined #ipfs
<jcdietrich>
master8787: ok. i was trying to actually ipfs cat the file, but ipfsbin must be wrapping it. BTW i can only get the first song.. others fail with
<dignifiedquire>
I thought it might be ndjson, but it has line delimiters everywhere so ndjson can’t parse it, but it’s missing proper array notation to be valid json as well
<whyrusleeping>
dignifiedquire: okay, i can PR a fix for that then
<dignifiedquire>
whyrusleeping: that would be great, also it would be great if there could be a header distinction between json, ndjson and plain text; at the moment I have to throw parsers at it and see what sticks
<dignifiedquire>
whyrusleeping: do you want me to file issues re ndjson?
<dignifiedquire>
jbenet: when you have a moment could you enlighten me on the current release process for the webui please
<whyrusleeping>
dignifiedquire: yeah, go ahead
<whyrusleeping>
dignifiedquire: also, re: header to specify json vs ndjson
<whyrusleeping>
if you send the stream-channels=true, it will be ndjson
<whyrusleeping>
otherwise json
<whyrusleeping>
(as long as content-type == json)
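Roughly (endpoint and hash are illustrative; the flag is the stream-channels option whyrusleeping mentions):

    curl 'http://127.0.0.1:5001/api/v0/ls?arg=<hash>'                        # plain JSON body
    curl 'http://127.0.0.1:5001/api/v0/ls?arg=<hash>&stream-channels=true'   # ndjson stream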
<jbenet>
dignifiedquire: make publish, then copy the hash into go-ipfs's core/corehttp/webui.go
<dignifiedquire>
whyrusleeping: okay that sounds good, so only ndjson format needs to be fixed
<jbenet>
oh and pin it.
mg- has quit [Ping timeout: 246 seconds]
<dignifiedquire>
jbenet: I see, so we can only update the webui with an ipfs release?
<whyrusleeping>
we could use ipns
<whyrusleeping>
via the dns entry thing
<whyrusleeping>
dnslink=/ipfs/<webuihash>
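i.e. a DNS TXT record pointing at the hash, which a gateway or local node can then resolve (record placement and <webuihash> are placeholders):

    # hypothetical zone entry:
    #   webui.ipfs.io.  IN  TXT  "dnslink=/ipfs/<webuihash>"
    dig +short TXT webui.ipfs.io        # inspect the record
    ipfs resolve /ipns/webui.ipfs.io    # resolve the dnslink via ipfs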
<dignifiedquire>
that would be nice, but we still should have something like a version, because the webui for 0.4 is probably incompat with 0.3
<whyrusleeping>
mm, right
<dignifiedquire>
so maybe an ipns entry like webui-0.3, and then for 0.4 we have webui-0.4 ?
<tperson>
This is how I would do it. Each release has an officially supported release of the webui (this is the hash that is included in the build).
<tperson>
Then you can publish updates through a trusted ipns entry
<tperson>
Version it like tags in docker
<tperson>
node:4 => v4.2
<whyrusleeping>
/ipns/webui.ipfs.io/v0.4.0
<tperson>
node:4.2 => v4.2
<jbenet>
i havent proven to myself ipns republishing works perfectly. what we should do is have a bot that queries a list of ipns names every few hrs to check for resolution. if it ever fails it logs that.
<tperson>
node:4.1 => v4.1
<whyrusleeping>
jbenet: thats why i said use the dnslink
<jbenet>
ah ok right
<jbenet>
yeah dns works fine
<whyrusleeping>
i think thats a very viable solution
<jbenet>
though--- the one problem is that dns lookup has a ttl
<jbenet>
so it doesnt work offline
<whyrusleeping>
mmm, right
<whyrusleeping>
as would the ipns lookup
<jbenet>
ancestry based ipns is TRTTD. (no expiry)
<whyrusleeping>
we could cache the webui version on first load
<whyrusleeping>
and have an explicit 'update' button for it
<tperson>
Which is when you'd call back to the published hash inside of the release binary
<dignifiedquire>
could we do something like use the dns resolution to fetch it and then pin it locally
<tperson>
fall back*
<dignifiedquire>
or could we use the update functionality you have built for 0.3.10 to update the webui independently of the go version, something like `ipfs update webui` ?
devbug has quit [Read error: Connection reset by peer]
devbug has joined #ipfs
ebarch has quit [Quit: Gone]
devbug has quit [Remote host closed the connection]
<lgierth>
whyrusleeping: pong
devbug has joined #ipfs
master8787 has quit [Quit: Leaving]
ebarch has joined #ipfs
<HoboPrimate>
hm, I don't have a defined question about it yet. Just some uneasy feeling about the effectiveness of blacklisting files in a content-addressable network like ipfs.
<whyrusleeping>
lgierth: could i get two things from you, i forgot the first one. but the second one was something like 'ipfs.io/motd'
ketohexose has quit [Remote host closed the connection]
<tperson>
Still probably need that API testing though
<dignifiedquire>
yes
<ion>
Now I’m curious, which four letters do the asterisks stand for?
<dignifiedquire>
u u c k
<tperson>
Fudge
<dignifiedquire>
this can be extended depending on the need
<dignifiedquire>
F********* for very bad cases
<tperson>
Fudgesicles
mg- has joined #ipfs
<dignifiedquire>
:D
<ion>
Fullerenes
<lgierth>
whyrusleeping: sure we can just have that point to an ipfs hash, or domain with TXT link
<lgierth>
same as what we do with ipfs.io/refs
<jbenet>
hey any ipfsers in Palo Alto or SF tonight? noffle and i are grabbing dinner in palo alto and hacking at coupa after.
Matoro has joined #ipfs
Tv` has joined #ipfs
<whyrusleeping>
lgierth: cool, do i have access to dns stuff?
<lgierth>
whyrusleeping: yeah generate yourself a digitalocean api token, and have a look at how ipfs/refs.git uses the dnslink-deploy
<whyrusleeping>
ah, cool
<lgierth>
whyrusleeping: so first thing is e.g. motd.ipfs.io, then we can mount that at /motd in nginx
<lgierth>
*or* you maybe wanna use ipns with dns? :)
<lgierth>
i haven't figured out how to deploy that neatly, yet
<whyrusleeping>
yeah, ipns would be cool to use
<lgierth>
probably need to either share the key, or have a host for deploys
<whyrusleeping>
and this is a good candidate, especially since the motd isnt 'super critical'
<lgierth>
in that case you can just omit the dns dance
<lgierth>
and have nginx rewrite ipfs.io/motd to ipfs.io/ipns/Qmfoobar
<lgierth>
i meant "ipns without dns" above ^
<whyrusleeping>
mmkay, i'll plan on that then
<whyrusleeping>
how should we manage the publishing node?
<whyrusleeping>
should it be one of the gateways?
<lgierth>
whyrusleeping: i'd say take deimos -- it only runs pinbot so far and is supposed to be kind of a utils host
<lgierth>
ideally we can say IPFS_PATH=/opt/sites/motd ipfs publish or something like that
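Spelled out, that deploy could look something like this (paths and the dedicated repo are assumptions; the publish subcommand in go-ipfs is 'ipfs name publish'):

    HASH=$(ipfs add -r -q /opt/sites/motd | tail -n1)          # root hash of the motd content
    IPFS_PATH=/opt/sites/motd-repo ipfs name publish "$HASH"   # publish under that repo's peer key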
<whyrusleeping>
cool cool, my ssh key is on it, yeah?
<lgierth>
yep!
<lgierth>
for root at least
devbug has quit [Ping timeout: 255 seconds]
gatesvp has joined #ipfs
Guest55877 has joined #ipfs
<gatesvp>
@jbenet I'm halfway between those two right now, sadly I already have company coming over for dinner... of course, if you can make it to San Mateo around dinner, I can offer a free home-cooked meal :)
<tperson>
Ah interesting, so you have to expose the headers?
<tperson>
lol
<dignifiedquire>
you have to expose the headers, through a header..
* dignifiedquire
bangs head against the wall
<whyrusleeping>
glhf
<tperson>
Easy fix at least
<whyrusleeping>
dignifiedquire: you can set your own headers in the config
<whyrusleeping>
see 'ipfs daemon --help'
<dignifiedquire>
I see
<dignifiedquire>
will try and use that
devbug has joined #ipfs
<dignifiedquire>
whyrusleeping: how are nested config values set trough the api?
<dignifiedquire>
can I use dot notation in the key arg?
elima_ has quit [Ping timeout: 255 seconds]
<whyrusleeping>
yep
<dignifiedquire>
ta
<whyrusleeping>
(see ipfs daemon --help)
<dignifiedquire>
(looked at it but wasn’t sure)
devbug has quit [Ping timeout: 240 seconds]
<dignifiedquire>
daviddias: are you around?
* daviddias
present
<dignifiedquire>
daviddias: great, I need your help fixing the header issues; I can't figure out how to set API.HTTPHeaders.Access-Control-Allow-Headers = ['x-stream-output', 'x-chunked-output'] via the api
<daviddias>
Why would the access control headers be those specifically?
<dignifiedquire>
because of the work I’ve been doing for the past hours: API.HTTPHeaders.Access-Control-Allow-Headers
<daviddias>
As for using the API to do that, you can check the gulp task in js-ipfs-api where the daemons are spawned (sorry for not sending a direct link, I'm on the phone)
<dignifiedquire>
ahh found it thanks, will try that
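For the record, the CLI equivalent is something like this (using the config command's --json flag so the value is stored as a list; header names as discussed above):

    ipfs config --json API.HTTPHeaders.Access-Control-Allow-Headers '["X-Stream-Output", "X-Chunked-Output"]'
    # restart the daemon for the new headers to take effect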
<dignifiedquire>
the thing is that we need to get these headers in outside of tests too; not sure if they should be set by go-ipfs itself by default or if we set them on startup
ygrek has quit [Ping timeout: 265 seconds]
<daviddias>
The fetch API expects these kinds of headers inside the header for CORS? That is a new one to me
elima_ has joined #ipfs
<dignifiedquire>
it was to me as well..but I checked against two browsers and the only headers we can access are “content-type”
<daviddias>
Well, stream and chunks should not be defined by config
<daviddias>
They can be passed through query string
<daviddias>
As a param to tell it whether we want things streamed
<dignifiedquire>
then why are we checking the response headers for that?
<daviddias>
go-ipfs is making the decision
<daviddias>
We check to see if go-ipfs is going to send us a bunch of things that are all parts of the same blob (chunked), or if it is going to send a bunch of things that are independent (streamed)
<daviddias>
A big file will come chunked
<daviddias>
A big list will come streamed
<daviddias>
(There might be bugs here, but this should be our expectations)
<dignifiedquire>
daviddias: yes that’s why we need access to those headers
<dignifiedquire>
but we can't access those headers unless go-ipfs sends us the correct ones
<daviddias>
When you mean access, you mean configure them ?
<daviddias>
Or read them on the response ?
<dignifiedquire>
the response that go-ipfs sends needs to include the correct access-control-allow-headers header otherwise we can not access x-stream-output and x-chunked-output headers in the browser
<daviddias>
Oh I see
<daviddias>
Thanks for your patience and for explaining it again :)
<daviddias>
All right, so that should be part of go-ipfs
<daviddias>
We shouldn't have to tell the API that we need access to the very thing that tells us how to treat the data it sends in the first place
<daviddias>
That can be added directly on the IPFS config object
<daviddias>
Which is JSON
<daviddias>
And PR'ed to go-ipfs
<dignifiedquire>
ok
<daviddias>
so that is why everyone was seeing different types in the browser
<daviddias>
That assertion always failed and it thought it would be a single blob
Encrypt has quit [Quit: Quitte]
<daviddias>
Probably even truncating things
elima_ has quit [Ping timeout: 250 seconds]
<daviddias>
whyrusleeping: check these details for the API ^^ (also, it is said that using x- prefixes is not good practice anymore, because it makes migration hard)
<dignifiedquire>
I’m setting the correct headers but fetch is still not working
<daviddias>
Oh, but can you read them now ??
<dignifiedquire>
yeeeeeeesssss
<dignifiedquire>
I can read them
<dignifiedquire>
and tests are passing
<dignifiedquire>
fuck web security
mg- has quit [Ping timeout: 246 seconds]
reit has joined #ipfs
mg- has joined #ipfs
patcon has joined #ipfs
<ipfsbot>
[js-ipfs-api] Dignifiedquire opened pull request #128: [WIP] Header work (master...headers) http://git.io/v4HFH
<lgierth>
the yml you posted looks great. building these dir structures is annoying :)
patcon has quit [Ping timeout: 255 seconds]
<daviddias>
woooo! :D
<daviddias>
tell me tell me tell me :)
<ipfsbot>
[go-ipfs] whyrusleeping created fix/gateway-close-notif (+1 new commit): http://git.io/v4Qvg
<ipfsbot>
go-ipfs/fix/gateway-close-notif cb56ec1 Jeromy: add closenotify and large timeout to gateway...
<jbenet>
lgierth: yeah this part of why IPLD is happening
deltab has quit [Ping timeout: 260 seconds]
<ipfsbot>
[go-ipfs] Dignifiedquire opened pull request #1979: Add correct access control headers to the default api config (master...fix/access-controll-headers) http://git.io/v4Qv5
<ipfsbot>
[go-ipfs] whyrusleeping opened pull request #1980: add closenotify and large timeout to gateway (dev0.4.0...fix/gateway-close-notif) http://git.io/v4QfL
<daviddias>
it is a way to avoid having to find where the bash script to sign the commits is :P
<daviddias>
otherwise git-cop is going to get you :P
deltab has joined #ipfs
<drathir>
lol 00:36 <+daviddias> otherwise git-cop is going to get you :P
patcon has joined #ipfs
* drathir
was caught in the past ;p
<ipfsbot>
[go-ipfs] sahib opened pull request #1981: --help: Add a note on using IPFS_PATH to the footer of the helptext. (master...master) http://git.io/v4QJi
<dignifiedquire>
whyrusleeping: I don’t understand the gofmt warning :(
<dignifiedquire>
oh now I get it, trailing commas are a must Oo
<dignifiedquire>
whyrusleeping: should be good now
<whyrusleeping>
dignifiedquire: cool, the code LGTM. but the http stuff will need either daviddias mappum or jbenet to look over
<dignifiedquire>
whyrusleeping: sure thing, as long as we can get this into 0.3.10 I’m happy
<whyrusleeping>
and if the PR were any more involved than that, i would ask you to rebase and PR against 0.4.0
<whyrusleeping>
but its pretty small
<dignifiedquire>
or is 3.10 already frozen?
<whyrusleeping>
it is, for the most part
<whyrusleeping>
i'll let this one slide so youre not blocked until 0.4.0 though
<dignifiedquire>
thanks!
devbug has joined #ipfs
devbug has quit [Remote host closed the connection]
devbug has joined #ipfs
<lgierth>
can someone try seccat and see if it works for you?
patcon has quit [Ping timeout: 264 seconds]
<lgierth>
i'm getting a panic from context.propagateCancel during secio.SessionGenerator.NewSession
<dignifiedquire>
alright I’m out good night everyone
<lgierth>
on both ends, which makes it look like a code bug
* dignifiedquire
falls to bed like a stone
<whyrusleeping>
lgierth: thats likely possible
dignifiedquire has quit [Quit: dignifiedquire]
<whyrusleeping>
gimme a sec to finish something and i'll try it out
<lgierth>
might be the same issue as i had yesterday (the initial 4 bytes in the handshake), but there i was passing context.TODO() so there was nothing to propagate
<drathir>
whyrusleeping: hi... looks like development is moving forward all the time...
<whyrusleeping>
drathir: just pressing on, slingin code, and fixin bugs :)
<drathir>
thats good ofc... good that the project is active all the time...