<achin>
collisions are possible, but so unlikely that no one will ever see a collision
<jbenet>
not until a hash function breaks
pjz has quit [Quit: leaving]
<vijayee>
is there a node-ipfs-dht?
<ralphtheninja>
jbenet: what do you mean "we vendor ours"?
<ralphtheninja>
jbenet: as in you want to manage the dependencies yourself rather than relying on go's system for doing that?
<ralphtheninja>
sorry, just some english I'm not used to
gaffa has quit [Ping timeout: 264 seconds]
<ralphtheninja>
so bootstrapping ipfs dependencies with ipfs itself :)
<ralphtheninja>
ipfs is all about meta :)
O47m341 has joined #ipfs
NightRa has quit [Quit: Connection closed for inactivity]
patcon has quit [Ping timeout: 276 seconds]
<jbenet>
ralphtheninja: go's approach to deps is "pull latest from master". it's a huge contentious issue in the community. it's gotten better in the last year or so, with common approaches to backing up and vendoring code, but there's no proper npm-like thing yet. it's just not in the core go team culture. we care about permanence and shit not breaking, so we _copy_
<jbenet>
the code. the go-ipfs codebase has all our deps in the "Godeps/" dir. this will shortly go away and be replaced by ipfs links with gx.
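The "copy the code" approach jbenet describes can be sketched as a few file operations; the package path, directory names, and file below are all hypothetical stand-ins, not the actual go-ipfs layout:

```shell
#!/bin/sh
# Minimal sketch of "vendoring": snapshot a pinned checkout of a
# dependency into the project's Godeps/ tree, so builds never fetch
# the dependency's master branch. All paths here are hypothetical.
set -e

# stand-in for `git clone ... && git checkout <pinned-revision>`
mkdir -p dep-checkout
echo 'package dep' > dep-checkout/dep.go

# copy ("vendor") the pinned source into the project tree
mkdir -p myproject/Godeps/_workspace/src/example.com/dep
cp -R dep-checkout/. myproject/Godeps/_workspace/src/example.com/dep/
ls myproject/Godeps/_workspace/src/example.com/dep
```

Tools like godep automate this copy; gx, mentioned above, aims to replace the copies with ipfs links.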
JasonWoof has quit [Ping timeout: 264 seconds]
<ralphtheninja>
jbenet: gotcha .. pulling latest from master sounds crazy
<ralphtheninja>
unless you are developing something and always want the latest
<ralphtheninja>
which isn't always true while developing either
<jbenet>
yeah, it's insanity in OSS. it works when you control all the code + can update all calling points (how google works), but it doesn't work when you have a very granular access community with code all over the place, silent users, and so on.
danielrf2 has joined #ipfs
danielrf1 has quit [Ping timeout: 260 seconds]
se3000 has quit [Max SendQ exceeded]
nycoliver has joined #ipfs
<ralphtheninja>
jbenet: TIL about vendoring code, wasn't familiar with the term before
<ralphtheninja>
although familiar with the concept :)
<computerfreak>
0.4 release soon?
<ralphtheninja>
jbenet: how are we doing with tutorials and explaining base use cases etc?
<jbenet>
ralphtheninja: terrible :( we need much much better docs
<ralphtheninja>
jbenet: I guess documentation in general, both for beginners and advanced users
<ralphtheninja>
jbenet: one thing I thought about when visiting the ipfs org and seeing all the repos was that it would be really nice to have some sort of map of how all the parts fit together
<ralphtheninja>
like a site map but for repos and projects
<jbenet>
github.com/ipfs/ipfs has a bit of a directory, but yes a more visual thing would be nice
<zignig>
good to see the 0.4 swarm is picking up... ;)
<ralphtheninja>
jbenet: do we have any release system to automate things like this?
<whyrusleeping>
jbenet: when did you add that to the checklist?
<whyrusleeping>
i don't see it...
<ralphtheninja>
just now ^
<zignig>
ralphtheninja: I use a cronjob to re-add a folder and publish every <INTERVAL>
<ralphtheninja>
zignig: you mean as in adding content?
<whyrusleeping>
ah, okay
<whyrusleeping>
well now i won't forget!
<computerfreak>
wait, go-ipfs over npm is a full ipfs daemon?
<zignig>
ralphtheninja: yes. The other trick that works but needs a rewrite is to check an IPNS record from a specific node and pin it every time it changes.
<zignig>
a git commit hook would work as well.
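zignig's cron trick might look like this hypothetical crontab entry (the folder path and hourly interval are invented for illustration):

```shell
# Re-add a folder every hour and publish the new root hash to IPNS.
# `ipfs add -r -q` prints one hash per added object; the last line is
# the folder's root, which `ipfs name publish` points the node's IPNS
# name at.
0 * * * * ipfs add -r -q /home/user/site | tail -n1 | xargs ipfs name publish
```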
<ralphtheninja>
zignig: oh okay, I meant if there was any system for releasing ipfs
* zignig
is investigating using caddy ( caddyserver.com ) as a proxy for ipfs.
<ralphtheninja>
zignig: like a build system or similar that does stuff automatically, rather than someone following a checklist
<zignig>
ralphtheninja: sort of, gobuilder is making binaries for all platforms, I don't think it auto-publishes yet.
<ralphtheninja>
zignig: ✔
* ralphtheninja
loves feeling like a newb
<zignig>
there is an automated system that will build ipfs from source and create ipfs packages automatically.
<zignig>
distribution via ipfs is still in the works, as I understand (jbenet?)
<ralphtheninja>
nods, I noticed that today
anticore has quit [Ping timeout: 240 seconds]
<zignig>
jbenet: got ipfs rkt working, may want to investigate adding it to the ipfs.io site as an adjunct to the docker image.
<zignig>
ralphtheninja: what is your use case for ipfs ?
<jbenet>
zignig: we're getting there. gobuilder works so nicely that we haven't needed to, but we really should stop using gobuilder's bandwidth. also we'll get to building individually to have developer-built + signed releases. i think the new distributions repo is all set up with the first part of this (building + publishing with ipfs)
<ralphtheninja>
zignig: no use case yet .. I want to work on it somehow though
<zignig>
jbenet: nice, auto build and sign is a great step forward.
hellertime has quit [Quit: Leaving.]
lidel has quit [Ping timeout: 246 seconds]
nekomune has quit [Ping timeout: 246 seconds]
Score_Under has quit [Ping timeout: 246 seconds]
lidel` has joined #ipfs
Score_Under has joined #ipfs
Confiks_ has quit [Ping timeout: 246 seconds]
Confiks has joined #ipfs
voxelot has joined #ipfs
lidel` is now known as lidel
nekomune has joined #ipfs
<computerfreak>
is there a command for saving the whole folder which is linked to an ipns hash?
<zignig>
computerfreak: get the folder into your local node, or save it as a folder to disk?
<computerfreak>
well, how to save the folder to disk?
<zignig>
computerfreak: ipfs get -o=<foldername> /ipns/Qmblahblahblah
<computerfreak>
wow we have get :D
<computerfreak>
didn't know
<zignig>
ipfs commands is your friend.
<computerfreak>
it tells me "no namesys on ipfsnode" ..
<zignig>
does ipfs ls /ipns/QmBlah give you anything ?
<computerfreak>
same error
<zignig>
the name does not exist then ( or you are not online)
<computerfreak>
ah need daemon running i guess ..
<computerfreak>
thx
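The exchange above boils down to something like this sketch; the /ipns/ name is a placeholder, and the commands only succeed with a daemon running and online, so the script is guarded to degrade gracefully elsewhere:

```shell
#!/bin/sh
# Sketch: list and save an IPNS-published directory. Placeholder name;
# guarded so the script exits cleanly where ipfs isn't installed.
if command -v ipfs >/dev/null 2>&1; then
  # list the links under an IPNS name (fails if offline or unresolvable)
  ipfs ls /ipns/QmPlaceholderPeerID || echo "resolve failed: is the daemon running?"
  # save the whole folder to ./saved-folder
  ipfs get -o=saved-folder /ipns/QmPlaceholderPeerID || true
else
  echo "ipfs not installed"
fi
DONE=1
```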
hoony has quit [Ping timeout: 276 seconds]
<achin>
richardlitt++
<M-mubot>
richardlitt has 1 point
<achin>
oooOOo
<zignig>
zignig--
<M-mubot>
zignig has -1 points
<achin>
nooooo
<achin>
achin += 200
<M-davidar>
mubot++
<M-mubot>
mubot has 1 point
hoony has joined #ipfs
_rht has joined #ipfs
nycoliver has quit [Quit: Lost terminal]
O47m341 has quit [Ping timeout: 240 seconds]
patcon has joined #ipfs
Senji has joined #ipfs
patcon has quit [Ping timeout: 256 seconds]
JasonWoof has joined #ipfs
Matoro has quit [Read error: Connection reset by peer]
Matoro has joined #ipfs
O47m341 has joined #ipfs
Matoro has quit [Read error: Connection reset by peer]
Matoro has joined #ipfs
ygrek has quit [Ping timeout: 264 seconds]
joshbuddy has joined #ipfs
akkad has quit [Excess Flood]
akkad has joined #ipfs
chris6131 has quit [Read error: Connection reset by peer]
<ipfsbot>
js-ipfs-api/greenkeeper-babel-plugin-transform-runtime-6.4.3 96f2998 greenkeeperio-bot: chore(package): update babel-plugin-transform-runtime to version 6.4.3...
Akaibu has quit [Quit: Connection closed for inactivity]
kvda has joined #ipfs
reit has quit [Ping timeout: 255 seconds]
jfred has quit [Ping timeout: 265 seconds]
ulrichard has joined #ipfs
patcon has joined #ipfs
kvda has quit [Ping timeout: 272 seconds]
mildred has joined #ipfs
reit has joined #ipfs
tlevine has quit [Read error: Connection reset by peer]
doesntgolf has quit [Ping timeout: 252 seconds]
chriscool has joined #ipfs
e-lima has joined #ipfs
nicolagreco has quit [Remote host closed the connection]
computerfreak has quit [Ping timeout: 245 seconds]
computerfreak has joined #ipfs
zz_r04r is now known as r04r
cryptix has joined #ipfs
disgusting_wall has quit [Quit: Connection closed for inactivity]
ylp1 has joined #ipfs
cryptix has quit [Ping timeout: 250 seconds]
simonv3 has quit [Quit: Connection closed for inactivity]
patcon has quit [Ping timeout: 276 seconds]
joshbuddy has quit [Quit: joshbuddy]
Senji has quit [Ping timeout: 265 seconds]
jaboja64 has joined #ipfs
Senji has joined #ipfs
joshbuddy has joined #ipfs
<ipfsbot>
[webui] greenkeeperio-bot opened pull request #204: Update i18next-xhr-backend to version 0.3.0
O47m341 has quit [Ping timeout: 240 seconds]
overdangle has quit [Ping timeout: 255 seconds]
Tv` has quit [Quit: Connection closed for inactivity]
Johnnie has joined #ipfs
r04r is now known as zz_r04r
O47m341 has joined #ipfs
rendar has joined #ipfs
a1uz10nn has joined #ipfs
<a1uz10nn>
yo
zz_r04r is now known as r04r
<dignifiedquire>
good morning everyone
voxelot has quit [Ping timeout: 260 seconds]
Encrypt has joined #ipfs
gunn_ has quit [Read error: Connection reset by peer]
s_kunk has joined #ipfs
s_kunk has quit [Changing host]
s_kunk has joined #ipfs
gunn has joined #ipfs
cemerick has joined #ipfs
hoony has quit [Remote host closed the connection]
yellowsir has joined #ipfs
KatzZ has joined #ipfs
joshbuddy has quit [Quit: joshbuddy]
a1uz10nn has quit [Ping timeout: 250 seconds]
<chriscool>
Hi everyone!
<ansuz>
salut!
<chriscool>
Salut ansuz!
<chriscool>
whyrusleeping daviddias: how is Lisbon?
jgraef has joined #ipfs
<chriscool>
whyrusleeping: is there a good way to convert a path to a key in go-ipfs?
<jgraef>
Are there any specs for unixfs available?
<ralphtheninja>
dignifiedquire: moin
<dignifiedquire>
moin ralphtheninja
<jgraef>
It appears to me that the Size field in mdag links is only there for unixfs. That looks not very elegant to me. Wouldn't it be better to store the file size in the file block itself? Or even better, allow storing arbitrary data in links?
_rht has quit [Quit: Connection closed for inactivity]
yellowsir has quit [Quit: Leaving.]
computerfreak has quit [Quit: Leaving.]
<yellowsir1>
@whyrusleeping today i'm getting no results for my ipns record, whether i use ls /ipns/domain or /ipns/<peerid>, but i changed nothing since yesterday
<yellowsir1>
could the problem be that the computer i'm publishing on uses ipfs version 0.4.0-dev?
<yellowsir1>
but i'm getting the right peerid when using `ipfs name resolve domain -r` , which didn't work yesterday :O
<yellowsir1>
*btw ipfs name resolve returns the ipfs hash not the peerid, but this is also fine :)
Senji has quit [Ping timeout: 250 seconds]
jaboja64 has joined #ipfs
andoma has quit [Ping timeout: 240 seconds]
andoma has joined #ipfs
gaffa has joined #ipfs
jaboja64 has quit [Ping timeout: 276 seconds]
rombou has joined #ipfs
Encrypt has quit [Quit: Quitte]
hellertime has joined #ipfs
O47m341 has quit [Ping timeout: 256 seconds]
<Sleep_Walker>
other nodes take content only when they want to access it? how long do they keep it? can they select what to keep and what not?
<patagonicus>
Sleep_Walker: A node can pin stuff so that it will be kept. Otherwise it is kept until a garbage collection is done (ipfs repo gc)
<Sleep_Walker>
patagonicus: thanks
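patagonicus's answer in command form; a sketch with a placeholder hash, guarded for hosts without ipfs installed:

```shell
#!/bin/sh
# Pinned content survives garbage collection; everything unpinned is
# removed when `ipfs repo gc` runs. QmPlaceholderHash is invented.
if command -v ipfs >/dev/null 2>&1; then
  ipfs pin add QmPlaceholderHash || true   # keep this content through gc
  ipfs pin ls --type=recursive || true     # inspect current pins
  ipfs repo gc || true                     # drop all unpinned blocks
else
  echo "ipfs not installed"
fi
DONE=1
```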
<Sleep_Walker>
is there some status page comparing implementations of ipfs? how feature-complete is each?
reit has quit [Ping timeout: 255 seconds]
<Ape>
Sleep_Walker: go-ipfs is the leading project, and even it is still under heavy development
<Ape>
You can see how well seeking the video works etc.
<Sleep_Walker>
and what are requirements to work behind NAT?
<Sleep_Walker>
(like ports to forward?)
<Ape>
You might want to forward TCP4001
O47m341 has joined #ipfs
<Sleep_Walker>
I'll try it with SSH port forwarding, I hope that the fact that the external IP address is different for incoming packets and for outgoing packets is not important
<ipfsbot>
go-ipfs/feat/multi-multiaddr-bootstrap 1649b44 Jeromy: combine multiple bootstrap addrs into single peer info...
<ipfsbot>
[go-ipfs] whyrusleeping opened pull request #2199: combine multiple bootstrap addrs into single peer info (master...feat/multi-multiaddr-bootstrap) https://github.com/ipfs/go-ipfs/pull/2199
jfred has joined #ipfs
se3000 has joined #ipfs
Guest25__ has joined #ipfs
hellertime has quit [Ping timeout: 264 seconds]
svetter has joined #ipfs
hellertime has joined #ipfs
rombou has joined #ipfs
tmg has quit [Ping timeout: 272 seconds]
anticore has joined #ipfs
chriscool has joined #ipfs
vijayee has joined #ipfs
<hannes3>
whyrusleeping: ~20tb of books ;)
<whyrusleeping>
hannes3: oh wow, thats a lot of books
<brimstone>
is there a way to add a file to a directory hash, without pulling the directory out, adding the file, and pushing it all back in?
<Kubuxu>
brimstone: in 0.4 yes
<Kubuxu>
With Files API.
<brimstone>
Kubuxu: thanks, 0.4 is soon right?
<Kubuxu>
It's in development. Current 0.3.11 is supposed to be last 0.3
grahamperrin has joined #ipfs
mildred has quit [Ping timeout: 260 seconds]
chriscool has quit [Quit: Leaving.]
chriscool has joined #ipfs
ulrichard has quit [Quit: Ex-Chat]
ylp1 has quit [Quit: Leaving.]
voxelot has joined #ipfs
rombou has quit [Ping timeout: 272 seconds]
Matoro has quit [Remote host closed the connection]
<Kubuxu>
drathir: I use `ipfs pin ls | cut -d' ' -f 1 | xargs ipfs ls`
<Kubuxu>
This way I have some idea what is going on
<whyrusleeping>
dignifiedquire: what node is that on?
<dignifiedquire>
biham
<voxelot>
drathir: i started working on a python script once that checks your repo and spits out links to everything cached on a node in a web browser, could be built out in the future to make quick deletes with the api
<dignifiedquire>
I'm running latest master (built a couple of hours ago)
<whyrusleeping>
dignifiedquire: running your own node on biham?
<drathir>
it would be nice to have that added as an option to ls, even...
<dignifiedquire>
whyrusleeping: no, just my own client to connect to the node inside the container (as lgierth told me to do)
<whyrusleeping>
do docker processes show up in ps aux ?
<dignifiedquire>
whyrusleeping: yes
<whyrusleeping>
ah, okay
<whyrusleeping>
thats weird
<dignifiedquire>
though "docker ps" fails
<whyrusleeping>
dignifiedquire: why did the node on biham get restarted?
<drathir>
also in the future it would be good to have something like a realtime connection top.... per file if possible would be great too...
<dignifiedquire>
whyrusleeping: restart?
<dignifiedquire>
there was no restart, my ssh connection never broke
<whyrusleeping>
someone killed the ipfs node on biham
<whyrusleeping>
like, half an hour ago it seems
<dignifiedquire>
not me
<drathir>
that could help to manage the load at servers where traffic is too high...
<whyrusleeping>
and restarted it
<dignifiedquire>
no idea, maybe it was auto restarted because of too much memory or sth? or it's just docker being docker
<daviddias>
dignifiedquire: he means the Docker container which runs biham, not the actual vm
grahamperrin has left #ipfs ["Leaving"]
<drathir>
good console simple ui is sometimes better than full gui...
<dignifiedquire>
lgierth: did you restart ipfs on biham half an hour ago?
<drathir>
voxelot: nice, I'd like to see how that looks when you finish and release it....
<dignifiedquire>
whyrusleeping: okay so it sounds like this issue is due to the docker container/ipfs daemon being restarted, but no idea why that happens so frequently (at least twice today)
<whyrusleeping>
well i'm gonna run a gc on biham
<whyrusleeping>
i have no idea who is doing stuff on it
<whyrusleeping>
so i hope whoever was using up 14TB has their stuff pinned
<dignifiedquire>
but then all the stuff pulled via ref gets deleted :(
<whyrusleeping>
i hate running infrastructure in docker containers...
<whyrusleeping>
i have no idea whats going on 90% of the time
amstocker has joined #ipfs
<dignifiedquire>
I thought that was the idea of docker
<dignifiedquire>
gives you full deniabilty when your infrastructure fails, because nobody knows what's happening
<lgierth>
whyrusleeping: dignifiedquire: i did nothing
<dignifiedquire>
lgierth: that's not a good sign
<whyrusleeping>
lgierth: weird. why did the node on biham restart then?
<whyrusleeping>
and where does stderr go?
<dignifiedquire>
/dev/null probably
<noffle>
hm. go-ipfs sharness tests don't all pass for me on master. known thing, or maybe a config problem on my end?
<noffle>
we have ci for them though, right?
<lgierth>
whyrusleeping: out-of-memory?
<lgierth>
yeah meh i agree that docker is probably not the optimal solution
<lgierth>
ok, so the graphs look interesting
<lgierth>
it went up to 4.1M goroutines, then back down to 1.8M, then was unresponsive for ~37min, then came back up
<lgierth>
and yeah probably out-of-memory, last memory count before the unresponsiveness was 29.8G
<dignifiedquire>
that sounds reasonable ;)
<drathir>
os fighting back for 37 min to get back to health?
<daviddias>
voxelot: you are back? long time no see! we missed you during the rest of the trip :)
jaboja has joined #ipfs
<whyrusleeping>
dignifiedquire: whats the reasoning behind storing stackexchange?
<whyrusleeping>
noffle: which tests?
<voxelot>
daviddias: yeah! openbazaar invited me to amsterdam next month for d10e, but honestly it's kind of nice just not being on international flights for a bit
<voxelot>
are you guys settled back in or still selling your bodies to get around europe?
<daviddias>
sweet!
<daviddias>
still around Europe, now in Lisbon
<voxelot>
i wish i could have stayed
<voxelot>
looking over some of the new js-ipfs stuff you're pushing
<dignifiedquire>
whyrusleeping: to kill biham ;)
<whyrusleeping>
dignifiedquire: lol, thats what i've gathered so far
allmyrickman has joined #ipfs
<allmyrickman>
'sup fam?
<allmyrickman>
back in the summer you were talking about backing up websites, references to "dead man's switches" were made in case these stores of info went down or went proprietary
<allmyrickman>
i was wondering if any of you had talked to the guys at aaaaargh?
<Kubuxu>
Last time I was left with defunc process and removable mount.
<Kubuxu>
s/removable/unremovable/
<whyrusleeping>
sudo umount -f /dev/fuse ;)
<Kubuxu>
It really just hides it. :P
<Kubuxu>
The mount is still stuck in kernel just removed from FS.
<dignifiedquire>
jbenet: you left me :'(
Encrypt has quit [Quit: Quitte]
<allmyrickman>
anyway, largest online humanities library is possibly going down. run by Sean Dockray and Marcell Mars. if anyone is still on this kick to index info-commons sites before they go down they might be people to talk to.
<daviddias>
voxelot: sweet, definitely could use your help working on this stuff, 'there is always some more thing to do' ahah
<Kubuxu>
allmyrickman: how much data there is?
<allmyrickman>
unclear; most is in .pdf not epub, so tbh i'd suspect a lot.
<hannes3>
allmyrickman: heh, i just came here asking about libgen ipfs thingie. surely the aaaarg stuff is in there?
<allmyrickman>
just asking them to make sure everything is hashed / indexed (don't know the term for making sure a file has a location in the merkle dag) might be a good step?
<allmyrickman>
are they doing ipfs libgen?
<allmyrickman>
I hadn't heard about it.. dunno if they're redundant or not. i thought libgen was long dead
<allmyrickman>
just searched a book i put up on aaaaarg and it is not there, so they are not redundant though there are undoubtedly redundancies
<allmyrickman>
i hear ipfs is good with that :^)
<dignifiedquire>
allmyrickman: is that all public license or is that pirated stuff as well?
<dignifiedquire>
hannes3: ^^
<hannes3>
all pirated
<allmyrickman>
definitely not public license, hence my suggestion that the files get indexed / into the merkle dag even if no one hosts it
<dignifiedquire>
k
<allmyrickman>
most of the material is put up by the authors, but it isn't public license
<allmyrickman>
"most" -- well, a stretch...
<allmyrickman>
hannes3, can you tell me any more about libgen ipfs?
<hannes3>
nah, all i did was join here and ask if it exists already
s_kunk has quit [Ping timeout: 240 seconds]
<allmyrickman>
nah i don't think it does ;_;
jholden_ has quit [Quit: WeeChat 1.3]
amstocker has quit [Ping timeout: 260 seconds]
anticore has quit [Quit: bye]
<Kubuxu>
Agata, libgen: before 32c3 we thought it was being blocked in the USA somehow, it turned out that they were fed up with US lawyers so they blocked the whole US and Canada from access.
<Kubuxu>
But archiving it isn't a bad idea
jaboja has quit [Remote host closed the connection]
<lgierth>
htop
rombou has quit [Ping timeout: 260 seconds]
nicolagreco has joined #ipfs
<Ape>
1 [|| 3.4%] Tasks: 74, 200 kthr; 1 running
<hannes3>
F9
<hannes3>
<Enter>
<Ape>
Why did my init process suddenly die..
amstocker has joined #ipfs
<allmyrickman>
Marcell actually gets asked about ipfs at the end of the talk i linked; he says he's been in here but lacks enough space to host the files on ipfs, i guess
<allmyrickman>
so nm
<allmyrickman>
i think it's ignorant: "we don't need a technical solution we need civil disobedience"
<allmyrickman>
fuckin' a
<allmyrickman>
fuckin' anarchists
<allmyrickman>
fuck i am so butthurt, the longer he speaks about ipfs the more i rage
<whyrusleeping>
just a reminder that we do have a code of conduct, its linked in the topic
<whyrusleeping>
this is a fairly large channel, with lots of different people in it
amstocker has quit [Ping timeout: 250 seconds]
<voxelot>
daviddias: yes i really want to help out! i'm trying to build out my market applications over the next 48 hours
<allmyrickman>
sry... didn't mean to be disrespectful towards anarchists
<voxelot>
then i'm catching a flight to NY this weekend (going to try and talk to consensys about funding perhaps) but im all yours after this weekend!
<voxelot>
finishing up the nodeschool stuff on testing and fs as well
amstocker_ has quit [Ping timeout: 240 seconds]
zorglub27 has joined #ipfs
jgraef has quit [Ping timeout: 260 seconds]
<thelinuxkid>
whyrusleeping: Kubuxu should have the rebase done later today
captain_morgan has quit [Remote host closed the connection]
hannes3 has left #ipfs ["Leaving"]
septeract has quit [Ping timeout: 260 seconds]
<dignifiedquire>
whyrusleeping I'm okay at JavaScraps
<ansuz>
whyrusleeping:
<ansuz>
hi
<ansuz>
dignifiedquire: wasn't it you that linked to that medium article about how bad js is?
<dignifiedquire>
ansuz: yes, but I linked to it mostly because I found it hilarious, rather than taking it seriously
<dignifiedquire>
also I realised later that it had a pretty bad, completely unnecessary attack on the main developer of babel, which left a pretty bad aftertaste..
<ansuz>
I don't know anything about babel
<drathir>
there is a way to force ipfs to use chosen node only?
<ansuz>
except the routing protocol
<dignifiedquire>
drathir: in which context?
<drathir>
dignifiedquire: to force fetching data from that server only and not connecting to other nodes...
<drathir>
deleted all supernodes and all swarms...
<dignifiedquire>
if you have access to both nodes config you could remove all bootstrap nodes and just add the addresses of each other, but that's a horrible hack and might or might not work
<dignifiedquire>
otherwise I'm afraid I don't know
<drathir>
added each other's PCs and one external node owned by me...
<ansuz>
if doing that doesn't work, I'd say ipfs swarm adding must be buggy
<ansuz>
that's how cjdns behaves, at least
<drathir>
i will try also remove the external one and left only lan ones...
<ansuz>
network discovery might join you if you have any running daemons nearby
<ansuz>
like cjdns beaconing
chriscool has quit [Quit: Leaving.]
chriscool has joined #ipfs
<thelinuxkid>
Kubuxu: re the fuse conversation you had with whyrusleeping where he mentioned the PR I worked on
<thelinuxkid>
it needs a rebase
<Kubuxu>
ahh
reit has quit [Ping timeout: 260 seconds]
chriscool has quit [Client Quit]
chriscool has joined #ipfs
<ipfsbot>
[go-ipfs] noffle opened pull request #2202: Lets 'ipfs add' include top-level hidden files (master...hidden_files-2145) https://github.com/ipfs/go-ipfs/pull/2202
<dignifiedquire>
just calling load recursively until it doesn't have `nextPage` anymore
<dignifiedquire>
of course if you can do a call with page numbers even better
<dignifiedquire>
then you can do a parallel call Promise.map([1...100], pageNo => fetch(since: since, pageNo))
<The_8472>
es6 promise has no map
<The_8472>
or range syntax for that matter
gwollon is now known as gwillen
<richardlitt>
dignifiedquire: thanks
<richardlitt>
dignifiedquire: Sorry, made the mistake of asking in #javascripts
<richardlitt>
which of course leads to ten minutes of "You're an idiot, this unrelated bit of code you have is wrong."
Encrypt has joined #ipfs
<richardlitt>
Every. Single. Time.
<dignifiedquire>
richardlitt: did they tell you to install jquey?
<richardlitt>
No.
<dignifiedquire>
The_8472: that's why he uses bluebird ;)
<dignifiedquire>
richardlitt: yeah never ask in #javashits
<dignifiedquire>
richardlitt: the chances of getting a good answer there are less than if you'd ask in #go-lang :(
<richardlitt>
dignifiedquire: Well
<richardlitt>
I need to ask somewhere.
<_rht>
(has anyone thought of integration with openai? would it be used to distribute/archive the dataset?)
Matoro has quit [Ping timeout: 256 seconds]
<Kubuxu>
It is always like that: Q: I need to do that but I have a problem with this. Can you help? A: You don't need to do that, you should do this other thing, and it doesn't matter that it would require a major rewrite.
<dignifiedquire>
richardlitt: ask here :)
<Kubuxu>
I stopped asking on IRC about programming because of this.
ashark has quit [Read error: Connection reset by peer]
<richardlitt>
Kubuxu: I mean, to be fair, that is how I learned bluebird
ashark has joined #ipfs
<richardlitt>
dignifiedquire: Why shouldn't I do `Promise.resolve().then()`?
<dignifiedquire>
Promise.try is for this use case and much nicer
<Kubuxu>
My snapping point was when I was hacking in JavaASM because I needed all the performance I could get and reflection wasn't enough. I knew what the risk was and I needed help with a simple question, but instead I got a 30 minute lecture about how bad of a person I am for using JavaASM.
hellertime has joined #ipfs
cemerick has quit [Ping timeout: 260 seconds]
<dignifiedquire>
richardlitt: did it work?
<richardlitt>
dignifiedquire: I don't think so.
libman1 has joined #ipfs
<dignifiedquire>
:(
<richardlitt>
:)
<dignifiedquire>
why are you happy that it didn't work?
Matoro has joined #ipfs
<richardlitt>
Because there's more to fix
<dignifiedquire>
okay :) good luck, let me know if you have other questions, happy to help
tmg has joined #ipfs
anticore has quit [Ping timeout: 245 seconds]
gordonb has quit [Quit: gordonb]
joshbuddy has joined #ipfs
gordonb has joined #ipfs
rombou has joined #ipfs
rombou has left #ipfs [#ipfs]
<richardlitt>
Oh man
<richardlitt>
This is ridiculous
<richardlitt>
yeah, thanks
<richardlitt>
Got it.
<dyce>
hmm, it would be cool to have a debian apt repository based on ipfs
jfis has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<noffle>
could be a good base for such a tool
chriscool has joined #ipfs
<noffle>
or are you suggesting a de facto debian repo that uses ipfs underneath? we already have something like that working for npm, and efforts are happening to get it working for pacman and even nixos
<dyce>
right right so you wouldn't have to change apt-get
The_8472 has joined #ipfs
<dyce>
just point to a new repo, but its backend is ipfs
<tmg>
how do I `ipfs rm`?
<dyce>
also, how much bandwidth can the ipfs gateway support?
* luigiplr
volunteers dignifiedquire as tests fixer
<noffle>
dyce: yep. if you 'dig ipfs.io' you'll see a bunch of IPs
ralphtheninja has joined #ipfs
<dyce>
noffle: so the domain registrar needs to support this?
<noffle>
for you to set up your own set of gateways, or...?
O47m341 has quit [Ping timeout: 264 seconds]
ashark has quit [Ping timeout: 276 seconds]
<dyce>
noffle: i'm no expert on networking. so afaik, you point a single ip at a domain. but in this case there are multiple ips for the ipfs.io domain
<dyce>
or could that single ip be a server of some sort, which replies: hey, go to this ip, it has the least amount of load right now
<noffle>
dyce: right. so, whatever resolves dns on your system will pick one of those IPs. there's a small TTL on them, so they change somewhat frequently
<noffle>
there's no knowledge of load happening; just handing out IPs from the set randomly in hopes of a reasonably uniform distribution
<noffle>
there MAY be other load balancing going on underneath (jbenet?), but I think that's the crux of what's going on
<lgierth>
no other load balancing
<lgierth>
just dns round robin
<lgierth>
that means if you're lucky you get sent to the gateway in singapore :)
<dyce>
so is this a default feature of most domain registrars? you just add multiple ips and it will round robin for you?
<lgierth>
yeah
<lgierth>
the registrar isn't even really involved in this
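The round-robin behaviour lgierth describes can be observed directly; output varies by resolver and over time, and the sketch falls back gracefully where the lookup tools or network are unavailable:

```shell
#!/bin/sh
# One name, several A records: each client just picks one, which is
# the whole load-balancing mechanism discussed above.
if command -v dig >/dev/null 2>&1; then
  dig +short A ipfs.io || echo "lookup failed (no network?)"
else
  getent hosts ipfs.io || echo "lookup failed (no dig/getent or no network)"
fi
DONE=1
```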
anticore has quit [Quit: bye]
e-lima has quit [Ping timeout: 250 seconds]
<ralphtheninja>
are we running ipfs nodes on the cjdns network?
<ralphtheninja>
lgierth: nice one .. do I need to do anything in particular to my ipfs daemon to let it know about these peers?
<ralphtheninja>
aye gotcha
prf has quit [Remote host closed the connection]
<lgierth>
your node will eventually discover these addresses, but they're not being preferred and will likely not be used because there are already other connections to these
<lgierth>
we'll make it better very soon
<lgierth>
have all kinds of addresses for the bootstrap nodes in the default config
m0ns00n has quit [Quit: undefined]
<richardlitt>
How do I use .map but return all responses to a single array
montagsoup has joined #ipfs
<richardlitt>
arr.map(fx()).then(fx()) does then for all of the items in arr, not for one map, right?
zorglub27 has quit [Quit: zorglub27]
disgusting_wall has joined #ipfs
O47m341 has joined #ipfs
<richardlitt>
sigh
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
simonv3 has quit [Quit: Connection closed for inactivity]
<ralphtheninja>
richardlitt: what up?
<ralphtheninja>
in javascript?
<richardlitt>
working on it ;)
<richardlitt>
*:)
Matoro has quit [Ping timeout: 256 seconds]
cryptix has quit [Ping timeout: 250 seconds]
<jbenet>
lgierth: how do people work around that? do browsers pick "the closest" node? it can't have stayed this dumb for 2 decades
prf has joined #ipfs
<jbenet>
(that = multiple dns A's in different locations)
<jbenet>
dyce: we'll be improving tooling for this use case specifically in the coming weeks so if you help us find problems/desired features/etc good chance to get them built
<achin>
as far as i know, browsers pick a random node
<dyce>
no im not, but i think it would be a great use case for debian
<achin>
there are some DNS services that promise to *reply* with the nodes that are the closest to the clients, but i'm not sure if such things are still fashionable