simonv3 has quit [Quit: Connection closed for inactivity]
felixn has joined #ipfs
<jakoby>
is fuse supposed to be slow?
<jakoby>
copying from fuse rather than 'ipfs get' is significantly slower
cemerick has quit [Ping timeout: 246 seconds]
<lgierth>
dignifiedquire: we don't have a good deployment method which you can do yourself at the moment, i'm afraid -- unless we say that ipns is good enough for now
<lgierth>
Kubuxu: taking care of 030+040 on ipfs.io this week
reit has joined #ipfs
user24 has quit [Quit: ChatZilla 0.9.92 [Firefox 43.0/20151210085006]]
<lgierth>
dignifiedquire: oh you're talking about the webui
<lgierth>
dignifiedquire: yeah word, when you want
Matoro has joined #ipfs
hoony has joined #ipfs
<lgierth>
dignifiedquire: it should be rather simple though, pin the thing and update core/corehttp/webui.go
<lgierth>
dignifiedquire: but i gather you might be looking for a better method eh? :)
<ipfsbot>
[js-ipfs-api] Dignifiedquire created greenkeeper-karma-0.13.19 (+1 new commit): http://git.io/vuEwi
<ipfsbot>
js-ipfs-api/greenkeeper-karma-0.13.19 8342a7d greenkeeperio-bot: chore(package): update karma to version 0.13.19...
paragon has quit [Quit: Leaving]
G-Ray has quit [Ping timeout: 240 seconds]
ppham has quit [Remote host closed the connection]
pfraze has quit [Remote host closed the connection]
mungojelly has quit [Ping timeout: 264 seconds]
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<achin>
i'm pondering how to get a list of new ipfs contributions each week, and this is trickier than i initially thought
<richardlitt>
.tell dignifiedquire no rush on the API code review, I'm going to be overhauling it this week.
<multivac>
richardlitt: I'll pass that on when dignifiedquire is around.
Pharyngeal has quit [Ping timeout: 272 seconds]
<richardlitt>
achin: oh?
<richardlitt>
achin: I'm about to write the weekly roundup; are you working on it?
<achin>
gonna work on a tool to get a list of contributors for each week
<richardlitt>
achin: awesome! Plop it into ipfs/weekly
<achin>
i first thought `get all commits made between now and now-7days`, but that will miss commits that were made more than 7 days ago, but were only merged this week
<richardlitt>
hmmm.
<achin>
so my next thought was: what was the master branch pointing to last week? what is it pointing to this week? calculate the difference between those two trees. but i don't think you can figure out what a given branch was pointing to at an arbitrary time in the past
<richardlitt>
damn
<richardlitt>
What about logging all merges, and all work on those branches?
<richardlitt>
That possible?
<achin>
i think something along those lines is doable. i think we'll need to get a list of all commits made in the desired time frame, but then check to see if any of those commits are merge commits and process those specially
<achin>
i've gotten out a pen and am sketching out some DAGs on graph paper :)
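The merge-aware approach achin describes could be sketched roughly like this (the commit graph, names, and dates below are invented for illustration): take commits whose commit date falls in the window, and set merge commits (more than one parent) aside for special processing, since they may have pulled in older side-branch work.

```python
# Toy sketch of the idea discussed above. The graph is made up.
COMMITS = {
    # sha: (parents, author, commit_day)
    "a1": ((), "alice", 1),
    "b1": (("a1",), "bob", 2),
    "f1": (("a1",), "carol", 3),    # feature-branch commit
    "m1": (("b1", "f1"), "bob", 9), # merge that landed f1 this week
}

def in_window(commits, since_day):
    """Split the window's commits into plain commits and merges."""
    plain, merges = [], []
    for sha, (parents, _author, day) in commits.items():
        if day >= since_day:
            # a merge commit has more than one parent
            (merges if len(parents) > 1 else plain).append(sha)
    return sorted(plain), sorted(merges)

print(in_window(COMMITS, 2))  # (['b1', 'f1'], ['m1'])
```

The merges would then be expanded to credit the side-branch commits they brought in, which is the tricky part being sketched on graph paper.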
Pharyngeal has joined #ipfs
felixn_ has joined #ipfs
<richardlitt>
:D
<richardlitt>
achin: sweet
<richardlitt>
Do you want to write the roundup for this week, or should I/
<richardlitt>
How about I write it up, and you keep working on that?
<achin>
good plan
<richardlitt>
Word. I'll do that now, then.
felixn has quit [Ping timeout: 265 seconds]
ppham has joined #ipfs
simonv3 has joined #ipfs
felixn_ has quit [Read error: Connection reset by peer]
felixn has joined #ipfs
felixn has quit [Ping timeout: 265 seconds]
<achin>
ahhh perhaps just by using the commit date (instead of the author date), the Right Thing will happen
Oatmeal has quit [Quit: TTFNs!]
Oatmeal has joined #ipfs
computerfreak has quit [Remote host closed the connection]
r04r is now known as zz_r04r
cemerick has joined #ipfs
felixn has joined #ipfs
ppham has quit [Remote host closed the connection]
Oatmeal has quit [Quit: TTFNs!]
Oatmeal has joined #ipfs
BananaLotus has joined #ipfs
pfraze has joined #ipfs
bsm117532 is now known as Guest65074
felixn_ has joined #ipfs
felixn has quit [Read error: Connection reset by peer]
gordonb has quit [Quit: gordonb]
pasta-moose has joined #ipfs
kerozene has quit [Ping timeout: 260 seconds]
ppham has joined #ipfs
jaboja has quit [Ping timeout: 256 seconds]
longguang has joined #ipfs
kerozene has joined #ipfs
Vegemite has joined #ipfs
<richardlitt>
Well
<richardlitt>
I finally got sick of clicking on my notifications
<lgierth>
then reference it as /ipns/yourdomain.net
<lgierth>
noffle: :)
leer10 has joined #ipfs
<leer10>
ipfs add cuts out around the ~30 minute mark of most files
* lgierth
bed
<lgierth>
leer10: try the dev0.4.0 branch, hopefully it's better there for you
<leer10>
lgierth: can ipfs.io access files on dev0.4.0?
<deltab>
what change made 0.4.0 incompatible?
hoony has quit [Ping timeout: 272 seconds]
felixn has joined #ipfs
reit has quit [Ping timeout: 240 seconds]
<VegemiteToast>
anything you can do via the cli you can do via the api?
<leer10>
deltab: I checked and apparently ipfs.io can access files added from 0.4.0 nodes
<VegemiteToast>
what's stopping a node from serving a page with a link like localhost:5001/api/v0/nasty code here
felixn has quit [Ping timeout: 265 seconds]
ppham has quit [Remote host closed the connection]
<VegemiteToast>
maybe I'm dumb. but that seems like an exploit to me
<tperson>
VegemiteToast: Almost all commands on the CLI have a mapping to the API, the exception is just certain flags, such as in `ipfs get`, you can't specify an -o flag in the API.
<tperson>
Only privileged hashes can be served from 5001.
<tperson>
But what exactly do you mean by /api/v0/<nasty code>?
<VegemiteToast>
i dunno maybe add ../
<tperson>
as in `/api/v0/add ../` ?
<VegemiteToast>
yis
<tperson>
The API doesn't read anything off the disk like that.
<VegemiteToast>
point is I can see someone saying hey click this api link it fix all your problems
<tperson>
Add takes a POST body that is the content you wish to add.
<VegemiteToast>
sorry very new
<jakoby>
are fuse mounts supposed to be significantly slower than `ipfs get`, etc.?
<achin>
i suppose it could pin something
<tperson>
Not intentionally
<jakoby>
I purposefully pinned it beforehand
<achin>
jakoby: sorry, i was saying that to tperson and VegemiteToast
<tperson>
VegemiteToast, achin: While possible to execute some API commands via GET, you couldn't make someone click a link to it. The browser sets a "Referer" header, and if it doesn't come from the API port the request will fail.
felixn has joined #ipfs
<VegemiteToast>
I see
<VegemiteToast>
ty
<tperson>
Just never set your Access-Control-Allow-Origin header in the config to a '*' :)
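For context, the header tperson warns about lives in the go-ipfs config file (typically ~/.ipfs/config) under API.HTTPHeaders. A hedged sketch of scoping it to a single trusted origin instead of '*' (the origin below is a placeholder, not from the chat):

```json
{
  "API": {
    "HTTPHeaders": {
      "Access-Control-Allow-Origin": ["http://example.com:8080"]
    }
  }
}
```

The values are lists of strings; a wildcard here would let any web page's scripts drive the local API.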
pfraze_ has joined #ipfs
jfis has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<Stskeeps>
so working on a file somewhere else is a matter of copying an emoji
<patagonicus>
VegemiteToast: Taking things that are the same and just storing them once. For example if you have a couple of gigs of zeros, you'd only store one block of zeros and then write down to use it N times.
<VegemiteToast>
If I was transferring a file to another node and then severed the connection, getting a
<VegemiteToast>
"Error: context canceled"
<VegemiteToast>
could the download be resumed
<Kubuxu>
Blocks that were downloaded are still there
<ipfsbot>
[go-ipfs] whyrusleeping created fix/api-only-post (+1 new commit): http://git.io/vuzCx
<ipfsbot>
go-ipfs/fix/api-only-post 89eaa73 Jeromy: make api handler only accept POST requests...
<Kubuxu>
so it will continue almost where it stopped
<VegemiteToast>
just do a pin to stop garbage collection?
<VegemiteToast>
hmm, nope
<Kubuxu>
whyrusleeping: POST only is also not the best solution, as some things fit into GET, some into POST, and maybe some fit PUT.
<Kubuxu>
VegemiteToast: it won't allow you AFAIK
<Kubuxu>
that is a thing worth thinking about
<jakoby>
kubuxu: would an api key work instead?
<jakoby>
like just have it expect a token=###
<VegemiteToast>
patagonicus: ty for the reply
<Kubuxu>
jakoby: still nope; imagine an app with some API key allowing users to show images to other users. Someone might link to an image that is really an API call to the local node and ...
Oatmeal has quit [Ping timeout: 260 seconds]
<Kubuxu>
or yes, tokens might work :P
<VegemiteToast>
tokens for api calls?
<jakoby>
like, just treat the api as if it's not on localhost. haha
<jakoby>
because it defaulting to localhost shouldn't be assumed as secure
<whyrusleeping>
Kubuxu: i'm not super sure... daviddias and I discussed things like 'cat' being POST
<whyrusleeping>
since they change internal daemon state (even though technically the same as a GET to the gateway)
cryptix has quit [Quit: wat]
<Kubuxu>
Hmm, they might be right, and as you'll probably never reference API calls in HTML directly, it shouldn't be an issue. POST will just make it more secure.
<jakoby>
is webui going to break now? :P
<Kubuxu>
probably :P
<VegemiteToast>
just make harmless querries not require a token, and the rest require one
<jakoby>
watch someone now create a POST submit form. haha
<VegemiteToast>
that wouldn't break as many things
<Kubuxu>
the token API is not complete in any part
jhulten has joined #ipfs
<VegemiteToast>
i dunno, i have no idea what i'm talking about
<whyrusleeping>
having a POST form wouldnt work because of CORS
<whyrusleeping>
GET can get around CORS because otherwise the internet wouldnt work
<whyrusleeping>
but the rest of the methods are filtered
cryptix has joined #ipfs
<jakoby>
oh yeah
jhulten has quit [Ping timeout: 250 seconds]
m0ns00n has quit [Quit: undefined]
<Kubuxu>
AFAIK you need Access-Control-Allow-Origin even for GET
<Kubuxu>
but I might be wrong
<Kubuxu>
it is that for GET the browser won't check before sending the request
<Kubuxu>
and even in case of some POST requests it won't make the check beforehand.
<VegemiteToast>
this gonna sound dumb but..
<VegemiteToast>
How do I keep a ipfs daemon running on my vps box, I can only ssh in.
<Kubuxu>
and `sudo loginctl enable-linger [username]`
<VegemiteToast>
enable-linger let the user be logged in forever?
<jakoby>
it's to let the daemon run forever
<jakoby>
linger is only needed for it to start when system boots
<jakoby>
you could do without linger and it should always stay running unless the system reboots
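The pieces above add up to a systemd user service. A minimal sketch (the unit filename and ipfs binary path are assumptions, not from the chat), saved as ~/.config/systemd/user/ipfs.service:

```ini
[Unit]
Description=IPFS daemon

[Service]
; adjust the path to wherever your ipfs binary lives
ExecStart=/usr/local/bin/ipfs daemon
Restart=on-failure

[Install]
WantedBy=default.target
```

Then `systemctl --user enable --now ipfs` starts it, and `sudo loginctl enable-linger $USER` (as mentioned above) keeps user services running after reboot without an active login session.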
<VegemiteToast>
i see
<VegemiteToast>
ty
<dignifiedquire>
whyrusleeping: I think the progress indicator is not fully working in 0.4
<multivac>
dignifiedquire: 2016-01-06 - 01:37:50 <richardlitt> tell dignifiedquire no rush on the API code review, I'm going to be overhauling it this week.
NightRa has quit [Quit: Connection closed for inactivity]
cemerick has joined #ipfs
nekomune has quit [Read error: Connection reset by peer]
nekomune has joined #ipfs
cemerick has quit [Ping timeout: 256 seconds]
infinity0 has quit [Remote host closed the connection]
hellertime has joined #ipfs
hartor has joined #ipfs
hashcore has joined #ipfs
infinity0 has joined #ipfs
<ianopolous>
VegemiteToast: you could also use screen or tmux if you don't need restart on reboot.
<VegemiteToast>
ty, went with nohup. hope it doesn't screw me in future
NightRa has joined #ipfs
Jekert has joined #ipfs
jhulten has joined #ipfs
jhulten has quit [Ping timeout: 265 seconds]
rombou has joined #ipfs
neurosis12 has joined #ipfs
<dignifiedquire>
daviddias: do you think we should use the files api for the files tab in the webui in 0.4?
ralphtheninja has quit [Ping timeout: 246 seconds]
<Kubuxu>
does anyone have an IPFS node in the 0.4 network with an IPv6 connection?
m0ns00n has joined #ipfs
<whyrusleeping>
blech
<whyrusleeping>
missed our flight because they changed the gate last minute
<whyrusleeping>
we made it to the gate two minutes late and they said "sorry, youre too late"
<lgierth>
:/
<whyrusleeping>
and just heard another announcement for a different flight on the intercom
<whyrusleeping>
they said the 'important message' in german
hashcore has quit [Quit: Leaving]
<whyrusleeping>
then started to say it in english "a very important last minute message for passengers en route to..." and then they dropped the intercom
<whyrusleeping>
and said nothing else
<lgierth>
lol these fuckers
<whyrusleeping>
but, i've found power and wifi
<whyrusleeping>
and their 'free one hour' thing can be defeated by using incognito
<lgierth>
hehe cookie based?
<lgierth>
nice
<ianopolous>
unlike the Germans to be disorganised
Oatmeal has joined #ipfs
ralphtheninja has joined #ipfs
<patagonicus>
Eh, we still like to complain about our tranes and planes. Although they're both probably a lot better than in a lot of other countries.
<patagonicus>
(But Japan is crazy. 30min on a main route will make it in to the news. Here no one cares, except for the people who missed important stuff because of it)
<patagonicus>
Wow, also *trains. I'm still not quite awake.
<ansuz>
whyrusleeping: flying through dusseldorf?
<ansuz>
sounds like something that happened to me there
<Kubuxu>
also while installing it fails to install fsevent
<dignifiedquire>
daviddias: pretty sure it was never official but it is not needed anymore now
<daviddias>
deeeeleeeete :P
<dignifiedquire>
Kubuxu: fs-events is an opt dependency for unix systems
<daviddias>
richardlitt around? would now be a good time to move the api to /specs and open the PRs for how each thing should behave, as discussed in the Apps on IPFS sprint meeting?
hellertime has quit [Quit: Leaving.]
<dignifiedquire>
Kubuxu: make sure npm ls is happy, then things "should" work
hellertime has joined #ipfs
m0ns00n has joined #ipfs
Gaboose has joined #ipfs
rombou has joined #ipfs
pfraze has joined #ipfs
Soft has quit [Quit: WeeChat 1.4-dev]
ashark has joined #ipfs
<ipfsbot>
[webui] greenkeeperio-bot opened pull request #171: Update i18next-xhr-backend to version 0.2.0
Encrypt has joined #ipfs
rombou has quit [Ping timeout: 246 seconds]
<richardlitt>
daviddias: Yeah, I'm planning on working on it today.
<richardlitt>
We're not moving api to specs, though.
<richardlitt>
daviddias: I'll attack it first and ask for consultation as I come across issues, I have a good idea on how to structure the newest (needed) overhaul. Just trust me for a day or two while I get it all set up, I'll ping as needed.
<tperson>
Morning folks
<whyrusleeping>
tperson: mornin
<whyrusleeping>
i hate flying
<tperson>
Where are you flying to?
<whyrusleeping>
BER -> CPH (Copenhagen)
cemerick has quit [Ping timeout: 256 seconds]
simonv3 has joined #ipfs
arkadiy has joined #ipfs
<noffle>
whyrusleeping: not too bad ;) I'm YYZ->SFO today
<whyrusleeping>
yyz?
<noffle>
toronto
<whyrusleeping>
lol, do you say it: 'why why zed' ?
<noffle>
whyrusleeping: depends, actually! southern ontario is so close to the US that there's a good mix of zee'ers and zed'ers
<noffle>
we watched american TV growing up, so lots of influence of 'zee' too
<noffle>
sesame street, etc
<daviddias>
richardlitt: sure, thank you :) I can help you move it and discuss more in detail any of the endpoints, I'm using your documentation to implement in JS
<richardlitt>
daviddias: cool. It's not being moved anywhere fast; I'm just going to close the PR and do a lot of interbranch work, I think. I'll ping you when that starts.
<richardlitt>
daviddias: thanks for being awesome. ~~~o/
<daviddias>
^5 :D:D
Soft has joined #ipfs
cemerick has joined #ipfs
<noffle>
the us border folks turned me down last week with my job docs for protocol labs :'( trying again today with original signed docs from jbenet
<noffle>
turns out getting docs from new zealand mountains to canada is hard
<richardlitt>
actually, I think it's more that the US Border is a jerk.
<richardlitt>
Sorry. My country sucks. We're working on it.
<noffle>
^^^^
<noffle>
YES
<noffle>
the fella I had was pretty intent on giving me a hard time
<richardlitt>
First plan: Vote Trump into office. He has some good ideas.
<whyrusleeping>
mmmm, if anyone else is in seattle, i want to start up a mesh network soon
<arkadiy>
is anyone in here familiar with the progress of IPFS-LD/IPLD? it looks like mildred is the other major contributor to the spec (aside from juan) but they don't seem to be in the channel
<whyrusleeping>
no, shes not normally in irc
<whyrusleeping>
i would just comment on the issue if you have questions
<arkadiy>
I'm mostly wondering how close to shipping/format freeze things are
<arkadiy>
juan mentioned targeting shipping before the New Year but that didn't seem to happen
<whyrusleeping>
no, i really don't know much about the ipld stuff
<whyrusleeping>
i'm focused on core go-ipfs stuff and shipping 0.4.0
<arkadiy>
got it, will open an issue
hartor1 has joined #ipfs
hartor has quit [Ping timeout: 265 seconds]
hartor1 is now known as hartor
<dignifiedquire>
whyrusleeping: does this "89.35 GB +Inf % -17h43m21s" mean 17h remaining? (cause that number keeps growing)
<dignifiedquire>
or is that elapsed time?
<dignifiedquire>
and is the GB the amount of data remaining or already added?
ipfs-gitter-bot has quit [Read error: Connection reset by peer]
<lgierth>
user24: you one of the dk0tu people?
<user24>
yep
<lgierth>
it's the radio club at one of the berlin unis
<lgierth>
cool hi!
<user24>
hi there
<lgierth>
freifunk too?
<user24>
know some people, helped to install some nodes, but not active
<lgierth>
:)
<lgierth>
we'll have a few ipfs nodes in freifunk soon
<user24>
nice
Soft has quit [Read error: Connection reset by peer]
jhulten has joined #ipfs
hartor1 has joined #ipfs
hartor has quit [Ping timeout: 260 seconds]
hartor1 is now known as hartor
<wiedi>
lgierth: do you have some more details about ipfs on freifunk?
<jbenet>
user24: very cool map ;)
<wiedi>
currently only connected via ffvpn but could probably run something
<jbenet>
:)*
<user24>
jbenet certified™
jhulten has quit [Ping timeout: 272 seconds]
<lgierth>
wiedi: making coffee, will be back in a few
rombou has quit [Ping timeout: 245 seconds]
voxelot has quit [Ping timeout: 265 seconds]
s_kunk has quit [Ping timeout: 240 seconds]
dignifiedquire_ has joined #ipfs
dignifiedquire_ has quit [Client Quit]
rombou has joined #ipfs
voxelot has joined #ipfs
Encrypt has quit [Quit: Quitte]
Senji has quit [Ping timeout: 276 seconds]
voxelot has joined #ipfs
voxelot has quit [Changing host]
user24 has quit [Quit: ChatZilla 0.9.92 [Firefox 43.0/20151210085006]]
<lidel>
Hm.. I am testing some edge cases and curl 'http://127.0.0.1:5001/api/v0/pin/add?arg=/ipns/ipfs.git.sexy/' seems to be hanging forever (0.3.11). Shouldn't there be some kind of timeout?
hartor1 has joined #ipfs
<lidel>
Or maybe it is just me?
<lidel>
I see it blocks other pin requests, looks like a bug
<NightRa>
Question: How does versioning of IPNS pointers work?
<NightRa>
How do we determine what's the newest one?
<NightRa>
Can I request from the DHT an (IPNS link, version), and have old versions maintained by nodes in the network?
<NightRa>
Needing this for materializing cyclic graphs of IPNS/Hash trees
<NightRa>
For when I encounter a back edge, I need this to be finite and static. Can't do rec. hashing of course, but can represent as a pair of a name (IPNS ptr) & version.
<NightRa>
This is one solution I thought about
rombou has joined #ipfs
<richardlitt>
dignifiedquire: done
<ipfsbot>
[js-ipfs-api] Dignifiedquire created greenkeeper-babel-core-6.4.0 (+1 new commit): http://git.io/vuVEG
<ipfsbot>
js-ipfs-api/greenkeeper-babel-core-6.4.0 dd4d39b greenkeeperio-bot: chore(package): update babel-core to version 6.4.0...
<multivac>
richardlitt: I'll pass that on when whyrusleeping is around.
<Guest9082>
here, this was the first ever ipfs link I viewed QmRJr89zGpdnGyhZXSnfU2VbZbyJm6RQjy4kw6A5AWdHFW
<dignifiedquire>
richardlitt: thanks
<richardlitt>
Thanks for the PR!
<ipfsbot>
[js-ipfs-api] Dignifiedquire deleted greenkeeper-babel-core-6.4.0 at dd4d39b: http://git.io/vuVu6
cemerick has quit [Ping timeout: 255 seconds]
<ipfsbot>
[js-ipfs-api] Dignifiedquire created greenkeeper-babel-plugin-transform-runtime-6.4.0 (+1 new commit): http://git.io/vuVz0
<ipfsbot>
js-ipfs-api/greenkeeper-babel-plugin-transform-runtime-6.4.0 f879931 greenkeeperio-bot: chore(package): update babel-plugin-transform-runtime to version 6.4.0...
rendar has quit [Ping timeout: 246 seconds]
<ipfsbot>
[js-ipfs-api] Dignifiedquire deleted greenkeeper-babel-plugin-transform-runtime-6.4.0 at f879931: http://git.io/vuVgA
<dignifiedquire>
richardlitt: one last commit for you to look at before I merge
rendar has joined #ipfs
<richardlitt>
dignifiedquire: cool. Still think `size` should be changed.
<richardlitt>
but functionally, looks good to go.
rombou has quit [Ping timeout: 256 seconds]
rombou has joined #ipfs
hellertime has quit [Quit: Leaving.]
cemerick has joined #ipfs
ashark has quit [Ping timeout: 265 seconds]
<dignifiedquire>
richardlitt: fixed and merged
<ipfsbot>
[webui] Dignifiedquire pushed 1 new commit to master: http://git.io/vuVwY
<achin>
btw i saw your comment about looking at the github event log. i think this is the right way to get a list of commenters (which jbenet wanted). i'm not sure it offers any advantages for getting the list of commits
<richardlitt>
Shouldn't take more than a minute
<achin>
one sec, let me dig up that branch
<richardlitt>
Ok. In that case, I can merge that now. Wish you'd written it in JS. :P
<achin>
i almost wrote it in rust :P but then nobody would have been happy with that!
<richardlitt>
Hahaha
<richardlitt>
Well, if you had in JS, I could have helped with the commenters, and we could have made one module for everything
<richardlitt>
I might refactor and rewrite it anyway
<achin>
i'm way more productive in python, so i used that. but i have no objections to a js version
<achin>
if someone gets me started, i could even help!
patcon has joined #ipfs
<achin>
how long do we want to wait for roundup feedback before merging? is 48 hours too long? could we say "at most 48 hours"
<achin>
(so that if discussion is settled before then, it can be merged before 48 hours)
rombou has quit [Ping timeout: 246 seconds]
<achin>
also, you suggested PR directly into the roundups folder, yet we have a drafts folder at the moment
<richardlitt>
ah, sorry
<richardlitt>
24 hours.
<achin>
should i change the readme to say "PR into the roundups folder" and we'll just fixup the current directory structure later to match ?
<richardlitt>
And that's it. I don't think we need more than that, and we need to enforce a harder deadline
<richardlitt>
PR into the drafts folder
<achin>
k
<richardlitt>
We only need one folder
<richardlitt>
the files should not be moved around - the PR will be the only movement
<richardlitt>
minimizes upkeep
<richardlitt>
Makes sense?
<achin>
i had an idea that a draft could be merged before publication time, which would allow others to submit additional PRs on that draft
<richardlitt>
sorry for the lack of clarity, named the folder wrongly ;^__^
<achin>
but in practice i can't really see that happening often
<richardlitt>
Nah. It wouldn't.
<richardlitt>
What we can do is make sure that we all have access to that repo, and that anyone can add commits directly on a remote PR
disgusting_wall has joined #ipfs
<richardlitt>
But, ideally, I think just one person folding in comments should be enough
<achin>
yes, i expect there will not be many cooks in this kitchen
<richardlitt>
If there are, indeed, any comments. I think this will largely be driven by one or two people, with everyone else focusing on their own little worlds
<richardlitt>
haha yeah
<achin>
PR updated
<achin>
PR #3 that is
<Kubuxu>
achin: with the tool for committers, if we had the hash from the previous week, then you could just walk the DAG until you hit that hash and fork on merges
<achin>
Kubuxu: do you have a proposal on how to get the hash from the previous week?
<Kubuxu>
Just save it in weekly repo?
<richardlitt>
Merged.
<Guest9082>
how could make a physically decentralised network? what would a persons node setup look like?
<richardlitt>
Kubuxu: what hash?
<achin>
richardlitt: let's also try to land a reasonable convention for naming the .md files in the roundup folder
<Kubuxu>
commit hash
<richardlitt>
Yeah, but there's a lot of repos
Encrypt has joined #ipfs
<richardlitt>
Wouldn't work for 60 odd repos, I don't think. I mean, we could save a hash file of 60 repos....
Guest9082 is now known as VegemiteToast
<richardlitt>
{'repo1:' hash1, 'repo2': hash2...}
<Kubuxu>
Save it as IPFS file.
<Kubuxu>
Yuo
<Kubuxu>
Yup
<richardlitt>
Not too bad of an idea, as long as the contrib tool automatically updates it so we don't have to remember to save it
<achin>
the problem is that you have to remember to run the tool at the right time
<achin>
seems like we might mess that up from time to time
<richardlitt>
No, you just get it from the last hash
<richardlitt>
then we just have "Since last time, which was 3.4 weeks ago"
<richardlitt>
So it's resilient in case we fail at updating properly
<achin>
oh, hmm. yeah i see your point
<Kubuxu>
Make a Travis tool that does it when new roundup gets merged.
<richardlitt>
Kubuxu: nah, just make it part of the process of getting the contribs from last time
<richardlitt>
That way there's no one unattributed
<Kubuxu>
That works too.
<Kubuxu>
So the tool would look for previous roundup, read the JSON with hashes from it. Walk the DAGs, get contributors, save new hashes in new roundup.
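Kubuxu's suggestion above could be sketched as set subtraction on the commit DAG (the tiny graph, names, and function below are invented for illustration): everything reachable from this week's head minus everything reachable from last week's saved hash, following both parents of every merge.

```python
# Hypothetical sketch of "walk the DAG until you hit last week's hash".
DAG = {
    # sha: (parents, author) -- all made up
    "a1": ((), "alice"),
    "b1": (("a1",), "bob"),
    "f1": (("a1",), "carol"),
    "m1": (("b1", "f1"), "bob"),  # merge commit: two parents
}

def contributors_since(head, last_week):
    """Authors of commits reachable from `head` but not from `last_week`."""
    stop, stack = set(), [last_week]
    while stack:  # mark everything already counted last week
        sha = stack.pop()
        if sha not in stop:
            stop.add(sha)
            stack.extend(DAG[sha][0])
    authors, seen, stack = set(), set(), [head]
    while stack:  # walk from the new head, forking at merges
        sha = stack.pop()
        if sha in stop or sha in seen:
            continue
        seen.add(sha)
        authors.add(DAG[sha][1])
        stack.extend(DAG[sha][0])
    return authors

print(contributors_since("m1", "b1"))  # bob and carol
```

The new head's hash would then be saved in the roundup (the JSON idea above) as the stopping point for next week's run.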
<achin>
richardlitt: actually, i take back what i said a while ago -- using the event log to get a list of commits might be a good idea, because it makes it a lot easier to get the github username of the person making the commit
<richardlitt>
Kubuxu: exactly
<richardlitt>
achin: I think so, too.
<richardlitt>
If you want me to refactor in JS, let me know, happy to do that.
<achin>
please go ahead!
<richardlitt>
achin: we should probably get the contributors since December 7th
<richardlitt>
Want to run that for me, and throw me the output in a gist?
<richardlitt>
I'm writing the draft for this week's roundup now
<achin>
richardlitt: i assume weekly/issues #7 is for general feedback on the process, not specfic feedback on any given roundup?
rombou has joined #ipfs
<lgierth>
whyrusleeping: are there more breaking changes in 0.4.0 apart from multistream? lots of subtly breaking stuff i guess, but anything more that breaks hard?
<lgierth>
i'm fine with saying that 0.4.x and 0.3.x simply can't communicate because of multistream, just wondering if there's anything noteworthy
<VegemiteToast>
what's multistream. and why should I be sad?
<VegemiteToast>
"low distraction music to work to"
TheWhisper has quit [Read error: Connection reset by peer]
rombou has quit [Ping timeout: 272 seconds]
<vakla>
hey guys, a friend set up ipfs on a research cluster at a university and for some reason the webui is showing one point and it's in Canada (definitely not the right place)
TheWhisper has joined #ipfs
<vakla>
is there any reason why this would be? freegeoip shows the right location for them
<VegemiteToast>
where's the uni?
<lgierth>
vakla: the geoip version it uses might be a bit outdated
<vakla>
Chicago
<VegemiteToast>
if it's a "research cluster" the back end might be hosted in canada lol
<VegemiteToast>
i dunno, as usual, I know nothing
<vakla>
No the servers are definitely on site
<VegemiteToast>
any tips for setting up php + ipfs on linux
<VegemiteToast>
wait I think php might not work with ipfs?
<VegemiteToast>
submit it as a bug on the git
<VegemiteToast>
:Vakla
<achin>
php requires a backend server to execute the php code
<vakla>
cheers
<VegemiteToast>
I has a backend. a backend I want to make
<VegemiteToast>
are u saying an http backend?
<achin>
yes, generally it's the http server that will execute the PHP code
<achin>
(or will spawn a child process to do so)
ashark has joined #ipfs
<whyrusleeping>
copenhagen is nice
<multivac>
whyrusleeping: 2016-01-06 - 20:52:11 <richardlitt> tell whyrusleeping to throw his todos into https://github.com/ipfs/pm/issues/77 please
<richardlitt>
hee hee
<richardlitt>
where are you staying?
<whyrusleeping>
lgierth: we remove the extra msgio wrapper
<whyrusleeping>
richardlitt: somewhere in denmark
<whyrusleeping>
david said a name and a number, i typed it into the uber app
<whyrusleeping>
and now we're here
<ralphtheninja>
hehe
<ralphtheninja>
chaos :)
<achin>
magic!
<richardlitt>
where are you staying?
<richardlitt>
gahg
<richardlitt>
Cool
<whyrusleeping>
oh shit
<whyrusleeping>
real internet
<whyrusleeping>
i'm not gonna leave
<dignifiedquire>
how real?
<achin>
the kind of internets us americans dream about?
<ipfsbot>
[go-ipfs] Luzifer opened pull request #2169: Update alpine to latest stable (master...update_alpine) http://git.io/vuwUc
feross_ is now known as feross
Encrypt has quit [Quit: Quitte]
ashark has quit [Ping timeout: 256 seconds]
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
bbfc has left #ipfs [#ipfs]
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
arkadiy has quit [Ping timeout: 252 seconds]
infinity0 has quit [Ping timeout: 256 seconds]
hehe123 has joined #ipfs
<hehe123>
Yo, is a Link in an IPFS object required to exist on the node that is creating the object?
<achin>
no
wiedi has quit [Read error: Connection reset by peer]
<achin>
though i think most of the high-level IPFS tools will want it to exist
<achin>
(maybe even most of the low-level ones too?)
pfraze has quit [Remote host closed the connection]
<achin>
example: run `ipfs object get QmNTbyFZSRaeYLX8utaQmAvuckMwPkoTxRyecURt6EDGAC` and notice how the linked hash doesn't exist anywhere
<hehe123>
Sweet, just confirmed myself, must have been doing something wrong when I tried before
infinity0 has joined #ipfs
wiedi has joined #ipfs
<hehe123>
Does the lookup of an object have a timeout? Seems to just hang for me with an object that doesn't exist
<hehe123>
Are there any plans for tools to deal with the grabbing of big files, like torrent clients do? Or is the intention to rely on browsers to provide that capability?
patcon has joined #ipfs
patcon has quit [Read error: Connection reset by peer]