jbenet changed the topic of #ipfs to: IPFS - InterPlanetary File System - https://github.com/ipfs/ipfs -- channel logged at https://botbot.me/freenode/ipfs/ -- Code of Conduct: https://github.com/ipfs/community/blob/master/code-of-conduct.md -- Sprints: https://github.com/ipfs/pm/ -- Community Info: https://github.com/ipfs/community/ -- FAQ: https://github.com/ipfs/faq -- Support: https://github.com/ipfs/support
<lgierth> daviddias Luzifer: what's the trick for building dev0.4.0 on gobuilder?
<Luzifer> Don't know. What's the question?
<Luzifer> Or the issue?
r04r is now known as zz_r04r
<lgierth> how to trigger a build of not-master
<lgierth> i recall there was some ceremony required for it
Guest20197 has quit [Quit: Leaving]
<Luzifer> Ah. If it's a branch different from master it's not built automatically. But you can do a manual build using `repo@<branch-or-tag-or-sha1>`
<Luzifer> So in this case `...ipfs@dev0.4.0`
<Luzifer> (sorry, only on the phone in bed, that's why I'm not using a copyable path)
<lgierth> thanks Luzifer :)
<Luzifer> You're welcome :)
arkadiy has quit [Ping timeout: 252 seconds]
<lgierth> my bad, typo
<Luzifer> And now: good night :D
<lgierth> anyhow, good night!
computerfreak has joined #ipfs
pfraze_ has quit [Read error: Connection reset by peer]
disgusting_wall has quit [Quit: Connection closed for inactivity]
pfraze_ has joined #ipfs
jrabbit has quit [Changing host]
jrabbit has joined #ipfs
Matoro has quit [Ping timeout: 256 seconds]
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
pfraze__ has joined #ipfs
pfraze_ has quit [Read error: Connection reset by peer]
Matoro has joined #ipfs
simonv3 has quit [Quit: Connection closed for inactivity]
jgraef2 has joined #ipfs
<Shibe> if I have an ipfs folder and a daemon is running there, do I have to ipfs add all the files?
<Shibe> or will it just serve them whenever somebody needs it
<achin> when you add a folder to ipfs recursively (with the -r flag) every file in that folder is also added to ipfs
jgraef has quit [Ping timeout: 272 seconds]
<lgierth> Shibe: it doesn't care about the directory it's running in
<lgierth> only what you add, or fetch from others, will be stored
<Shibe> so I have to add every file there?
<lgierth> add -r
<Shibe> okay
<Shibe> lgierth: if I ipfs get a file, is it automatically added?
<achin> it is automatically cached by your node, but might be deleted later if you run "ipfs repo gc"
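The flow achin and lgierth describe looks roughly like this (the folder name and hash below are placeholders, not real content):

```shell
# add a directory and everything under it (-r = recursive);
# prints one hash per file plus a final hash for the directory itself
ipfs add -r ./my-folder

# fetching content caches it locally, so your node serves it to others...
ipfs get QmExampleHashGoesHere

# ...until the cache is garbage-collected; only pinned content survives a gc
ipfs pin add QmExampleHashGoesHere   # keep it around permanently
ipfs repo gc                         # drop everything unpinned
```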
pfraze__ has quit [Read error: Connection reset by peer]
<Shibe> ok
pfraze_ has joined #ipfs
libman has quit [Ping timeout: 260 seconds]
pfraze_ has quit [Read error: Connection reset by peer]
pfraze_ has joined #ipfs
libman has joined #ipfs
HastaJun has quit [Ping timeout: 265 seconds]
libman1 has joined #ipfs
libman has quit [Ping timeout: 240 seconds]
voxelot has quit [Ping timeout: 264 seconds]
mec-is has quit [Ping timeout: 252 seconds]
pfraze has joined #ipfs
pfraze_ has quit [Read error: Connection reset by peer]
<lgierth> ok friends, i need the combined wisdom of #ipfs -- on a server, i want to send the same incoming http requests to two different backends simultaneously, and give the first "successful" response to the client. any suggestions? nginx and haproxy both seem to not support this kind of multiplexing. does anyone know of other http servers that do support this, or tcp-level tools?
<brimstone> lgierth: a short timeout isn't sufficient?
<lgierth> we don't really have a notion of a timeout in this case
<brimstone> i guess because the request might take a while
<lgierth> yeah
<lgierth> the two backends are ipfs-0.3.x and ipfs-0.4.0 :)
<lgierth> we want to have ipfs.io backed by both for a while
<noffle> lgierth: just plain http requests? maybe just whip up a quick tiny http proxy server in go or node (can help ya with node side!)
pfraze has quit [Read error: Connection reset by peer]
<brimstone> that's my reaction, since i can't find anything by googling either
<lgierth> noffle: yeah i think i'm gonna go with that if nobody brings up something good until tomorrow :)
pfraze has joined #ipfs
patcon has quit [Ping timeout: 264 seconds]
<lgierth> very very hairy
<brimstone> that's so gross
<lgierth> :P
patcon has joined #ipfs
Matoro has quit [Quit: shutting down]
<drathir> lgierth: but in theory that shouldn't work, i guess, bc what happens if the first instance finishes receiving the stream faster than the second one?
pfraze has quit [Read error: Connection reset by peer]
<lgierth> the case will generally be that either 1) one instance is very much faster than the other, or 2) both time out hard
<lgierth> so milliseconds don't count
cemerick has joined #ipfs
pfraze has joined #ipfs
<drathir> haproxy should handle that in theory, but if the timeouts are really small it will always pick the first one i guess...
<lgierth> i couldn't find anything close in haproxy's docs
<drathir> that's how it works with two tor instances: one really has to go down for it to switch to the second one...
<lgierth> it's not about one instance or the other going down
<lgierth> it's that the content that's requested might be on either one of them
<lgierth> so i *need* to try both -- and i can't do it one after the other because that might take ages
pfraze has quit [Read error: Connection reset by peer]
boltmaker has quit [Read error: Connection reset by peer]
pfraze has joined #ipfs
libman1 has left #ipfs [#ipfs]
<M-davidar> jbenet (IRC): yeah?
Matoro has joined #ipfs
patcon has quit [Ping timeout: 245 seconds]
disgusting_wall has joined #ipfs
<Igel> ughk.. the hell was that comcast
* Igel pinged out
<Igel> not even sure if my question reached beyond the proxy.. i'll wait and try back tomorrow l8rz..
<Shibe> all ipfs hashes start with "Qm" right?
<achin> yep
antipuseduo has joined #ipfs
<lgierth> no
<lgierth> Qm is one specific multihash
<lgierth> sha256 in base58
<achin> well, in all known versions of ipfs, Qm is the hash in use at the moment
<lgierth> ok ;)
<achin> but you have the more correct answer :)
<lgierth> i'm pretty sure though that the reason for the question was some variation of "can i assume that it'll always be Qm" :)
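A small illustration of why lgierth's answer is the precise one: a sha2-256 multihash is the two prefix bytes 0x12 (sha2-256 function code) and 0x20 (32-byte digest length) followed by the digest, and those prefix bytes always base58-encode to a leading "Qm". The base58 encoder is hand-rolled here for self-containment; real code would use the go-multihash library.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/big"
)

// base58btc alphabet as used by IPFS (Bitcoin-style: no 0, O, I, or l).
const alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

func base58Encode(data []byte) string {
	n := new(big.Int).SetBytes(data)
	radix := big.NewInt(58)
	mod := new(big.Int)
	var out []byte
	for n.Sign() > 0 {
		n.DivMod(n, radix, mod)
		out = append([]byte{alphabet[mod.Int64()]}, out...)
	}
	// leading zero bytes are preserved as '1' characters
	for _, b := range data {
		if b != 0 {
			break
		}
		out = append([]byte{'1'}, out...)
	}
	return string(out)
}

func multihashSHA256(payload []byte) string {
	digest := sha256.Sum256(payload)
	// multihash prefix: 0x12 = sha2-256 function code, 0x20 = digest length (32)
	return base58Encode(append([]byte{0x12, 0x20}, digest[:]...))
}

func main() {
	mh := multihashSHA256([]byte("hello"))
	fmt.Println(mh, len(mh)) // a 46-character string beginning with "Qm"
}
```

A different hash function would get a different prefix byte and therefore different leading characters, which is exactly why "always Qm" is not a safe assumption.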
<achin> though i've tried to create other hashes, ipfs generally failed to work with them
<lgierth> yeah? like create something with a non-Qm hash on one host, fetch it on the other?
<lgierth> i would have expected that to work
<achin> yeah
<lgierth> wanna file an issue? we should get that fixed
<jbenet> achin can you make a test case for that?
<achin> sure thing. low priority, i'll get to it sometime this week (i need to retest with the latest master and dev040)
<jbenet> davidar: hello :) -- q: how many "files" do you think there are in all the archives stuff-- super roughly, order of magnitude estimate
<achin> i believe all i did was calculate the new hash and then copy a file from $IPFS_PATH/blocks into its new home ('new home' == new path based on the new hash)
<M-davidar> jbenet (IRC): ooh, good question, I'd say in the millions, perhaps?
<lgierth> openwrt alone is a few 100k (although that's not completely added yet)
<achin> i alone have more than 1 million files on my nodes
<lgierth> jbenet: use kg as the unit
<lgierth> nobody will notice!!11
<achin> bits are heavy, yo
<lgierth> it's big data after all
<M-davidar> achin (IRC): so 1-10M in total maybe?
<achin> (doing a find | wc -l at the moment to get a more accurate count for the two biggest repos i have)
<achin> sure, that seems about right
<achin> i need faster disksssss
<achin> at the moment, a lot of the archives are just single-file data dumps. like the GPG database is 7.7GB but only in a few hundred files
<achin> but between the cdnjs archive and the wikispecies archive, i'm hosting 2646000 files
jgraef2 has quit [Ping timeout: 255 seconds]
<jbenet> davidar achin thanks :) -- I estimated we have 100M-1B files on the network atm. I don't think I'm too far off.
<achin> M-davidar: btw i was glad to see that you opened #53. some of the hashes that i've collected (and patched into my archives tree) seem to have disappeared from the network
<achin> having the ipfs org able to pin some of this data (at least for the short-term) would be great
grahamperrin has joined #ipfs
<lgierth> achin: we should reference all the archives under ipfs.io/refs eventually
<M-davidar> achin: awesome, I was hoping you'd be on board with that :)
<achin> yeah. do you all have a mechanism to keep ipfs.io/refs updated?
<lgierth> github.com/ipfs/refs
<M-davidar> lgierth (IRC): yeah, once we get them organised into a single tree it should be pretty trivial to get it into refs
<lgierth> it's supposed to build the whole /refs tree from all kinds of sources
<lgierth> or that yeah
<achin> does that repo accept PRs? does it make sense to start to build up a /refs/archives playground?
<lgierth> sure
<achin> (there is at least 1 repo that i can easily keep updated daily, and pinned)
<lgierth> deploying it to ipfs.io/refs is a bit of a hassle at the moment because it involves updating the TXT record for refs.ipfs.io
<lgierth> but other than that it's cool
dignifiedquire has quit [Quit: Connection closed for inactivity]
<lgierth> re: deployment, you can drop me an issue in ipfs/ops-requests for simple requests like that, e.g. "update refs.ipfs.io to Qmfoobar please"
<M-davidar> achin (IRC): that would be awesome :D
<achin> do i just plop something in versions/current? my quick skim of this repo is failing to fully grok it
<lgierth> yeah most of the code is for building a dag from the refs-denylists-dmca repo
<lgierth> a bit of it is also for handling the "previous" link
<lgierth> i.e. for versioning
<achin> while we are on the subject of archives, do any of you have any thoughts on my comment at the bottom of https://github.com/ipfs/archives/issues/20#issuecomment-170539524 ?
<lgierth> maybe it could do the same for the archives repo, or you let it resolve an ipns key or so
<achin> at the moment, a lot of the archiving efforts are still in-progress. i think a manual process is pretty OK for the moment
<lgierth> achin: i promise i'll have a look, kinda forgot about it today.
<achin> no problem!
<achin> mondays are always tough
<achin> (i fear that having a massively flat merkledag that keeps every wikipedia page directly under it will be untenable)
<lgierth> the npm archive is the same
<achin> with how many files?
jgraef2 has joined #ipfs
<lgierth> we'll work around it by fanning large unixfs directories out to multiple
<lgierth> somehow
<lgierth> a few 100k i think
<lgierth> daviddias knows more
<achin> i'd love to be CC'd on that discussion when it happens
<achin> wikipedia will be at least 5million
<achin> (there are 5 million articles, but that's not counting redirect links)
<achin> ((which ipfs handles really well of course!))
<achin> thanks!
<M-davidar> achin: yeah, I'm not able to access it either :/
<lgierth> oooh got it
<lgierth> that /A object is just mega large?
<achin> yeah, megalarge
<lgierth> haha yeah
<achin> something like 600k links under it
<lgierth> #wontfix :P
<achin> noooo, i want to biggggerr!
<lgierth> let's verify that that's the issue though
jgraef2 has quit [Ping timeout: 240 seconds]
<M-davidar> achin: does 3-4hrs for that many files seem reasonable? I would have hoped it to be a little faster tbh
<achin> i too had hoped for it to be faster, but i have some evidence that i'm diskIO bound
<lgierth> you might be seeing your repo size grow much faster than you'd expect, too
joshbuddy_ has joined #ipfs
joshbuddy has quit [Read error: Connection reset by peer]
joshbuddy_ is now known as joshbuddy
<lgierth> are you adding via ipfs add, or via the files api?
<achin> via ipfs-add
<lgierth> ok
<lgierth> because at least with the files api, there is one more issue in addition to the sheer number of links hitting all the limits
<achin> i'm not a disk-performance guru, so i'll need to do some more research on how to profile this
<achin> but at the moment, i'm not pushing ipfs to be faster
<lgierth> with every link added, a new temporary object is created, and they get huge of course for so many links (we're talking 10 to 100 MB)
<lgierth> but that problem might simply not exist with ipfs add itself
<achin> the current on-disk size of the node in question is about 40MB i believe
<M-davidar> achin: ssd or spinning metal?
<achin> metal spinning at 7200. i have 6 disks in a 2-vdev zfs pool
<M-davidar> hmm, my gut tells me ipfs spends a lot of time seeking on hdd's compared to ssd's
<achin> i'd love to get an SSD as a write cache, but i'm out of drive bays
<achin> i've got plans for a bigger NAS (which i intend to include a write cache drive), but that is many many moons away
<M-davidar> whyrusleeping: is it possible to profile how much time is spent seeking?
<M-davidar> achin: of course, I don't think the solution should be "buy an SSD" :p
<lgierth> yeah random accesses are definitely faster on ssds
<achin> well.. dumping many millions of files/billions of bytes into ipfs isn't exactly a common workload
<lgierth> GB/$ is also much higher ;)
<lgierth> achin: don't say that! ;)
<lgierth> we want this to work well
<drathir> does ipfs support resume in pinning?
<achin> ok ok! i take it back :)
<achin> but now you have to help me prove that ipfs can be faster :)
<lgierth> i wanna put even more than just millions or billions of stuffs into ipfs
<lgierth> word! it'll become ever faster
<lgierth> and more resilient
<achin> ipfs-zoomzoom
patcon has joined #ipfs
<achin> the hash in here doesn't seem to be the right one https://github.com/ipfs/refs/blob/master/versions/current
<achin> (it contains a links subdir, but i'm not able to find it)
cemerick has quit [Ping timeout: 245 seconds]
moreati has quit [Ping timeout: 245 seconds]
<achin> another idea is: instead of using ipfs.io/refs/archives as a playground, we can just do something in the ipfs/archives repo
<lgierth> yeah and build some code there that builds the dag :)
voxelot has joined #ipfs
<lgierth> and refs.git can just call that code
<lgierth> achin: nevermind the versions/ files
<lgierth> they'll be gone with the next commit
<achin> consider them ignored!
<achin> M-davidar: thanks for the thing with archives!
<M-davidar> achin: :)
<drathir> also it would be great to have progress reporting for pinning...
simonv3 has joined #ipfs
pfraze has quit [Remote host closed the connection]
patcon has quit [Ping timeout: 246 seconds]
pfraze has joined #ipfs
cemerick has joined #ipfs
reit has joined #ipfs
Qwertie has quit [Quit: Cya o/]
wowaname has joined #ipfs
reit has quit [Ping timeout: 256 seconds]
reit has joined #ipfs
boxxa has joined #ipfs
HastaJun has joined #ipfs
pfraze has quit [Remote host closed the connection]
devbug has joined #ipfs
ygrek has quit [Ping timeout: 255 seconds]
Matoro has quit [Quit: shutting down]
Matoro has joined #ipfs
antipuseduo has quit [Read error: Connection reset by peer]
truantship has joined #ipfs
devbug has quit [Ping timeout: 245 seconds]
the193rd has quit [Quit: Quitten]
indigane has left #ipfs ["Leaving"]
simonv3 has quit [Quit: Connection closed for inactivity]
hoony has joined #ipfs
Matoro has quit [Ping timeout: 240 seconds]
hoony has quit [Ping timeout: 264 seconds]
hoony has joined #ipfs
NightRa has joined #ipfs
m4nu_ has joined #ipfs
felixn has joined #ipfs
Matoro has joined #ipfs
pfraze has joined #ipfs
wowaname is now known as horkhorkhork
hoony has quit [Ping timeout: 250 seconds]
rendar has joined #ipfs
horkhorkhork is now known as opal
ulrichard has joined #ipfs
boxxa has quit [Quit: Connection closed for inactivity]
hoony has joined #ipfs
pfraze has quit [Remote host closed the connection]
devbug has joined #ipfs
dyce has joined #ipfs
mildred has joined #ipfs
devbug has quit [Ping timeout: 240 seconds]
jaboja has joined #ipfs
<dyce> if i add the same file on two different nodes, it should be the same hash?
mildred has quit [Ping timeout: 264 seconds]
jhulten has quit [Ping timeout: 265 seconds]
<deltab> dyce: if the same chunking strategy is used, yes (aiui)
<dyce> deltab: but if i take both nodes down, the gateway wont be able to reach the hash?
<deltab> right
<deltab> the content has to be available on a node somewhere; the DHT just points to where it can be found
<dyce> so sort of like a personal/p2p git repo? does it select at random which node to use?
<deltab> yes, crossed with bittorrent
<deltab> it'll connect to any nodes it can get the chunks from, like bittorrent does
<deltab> the more nodes there are, and the better their connections to you, the faster you'll get the file
<dyce> nice. i could see that bandwidth could be a concern for the gateway. can you volunteer as a gateway?
<dyce> like tor
<dyce> in some way
<deltab> maybe, I don't know
<deltab> simply having a node running helps with the DHT
<deltab> and if you pull content to the node, it becomes available to others
<deltab> there are 'pinbots' that do that
unforgiven512 has quit [Quit: ZNC - http://znc.in]
<deltab> on request they'll download something and share it with others
<dyce> i see
<dyce> oh i think ipfs daemon basically is a gateway
mildred has joined #ipfs
<dyce> on the webui
<deltab> oh, you can run your own gateway; for your use only or open to others, it's your choice
m0ns00n has joined #ipfs
Senji has joined #ipfs
cemerick has quit [Ping timeout: 272 seconds]
ecloud is now known as ecloud_wfh
anon3459 has joined #ipfs
reit has quit [Ping timeout: 264 seconds]
mildred has quit [Read error: No route to host]
mildred has joined #ipfs
unforgiven512 has joined #ipfs
devbug has joined #ipfs
devbug has quit [Ping timeout: 272 seconds]
Tv` has quit [Quit: Connection closed for inactivity]
truantship has quit [Ping timeout: 240 seconds]
felixn has quit [Quit: Textual IRC Client: www.textualapp.com]
elima has joined #ipfs
zz_r04r is now known as r04r
jhulten has joined #ipfs
NightRa has quit [Quit: Connection closed for inactivity]
myrmicid has joined #ipfs
jhulten has quit [Ping timeout: 245 seconds]
dignifiedquire has joined #ipfs
joshbuddy has quit [Quit: joshbuddy]
joshbuddy has joined #ipfs
SebastianCB has joined #ipfs
Encrypt has joined #ipfs
computerfreak has quit [Remote host closed the connection]
Matoro_ has joined #ipfs
Matoro has quit [Ping timeout: 240 seconds]
<SebastianCB> Not that ipfs is lacking potential, but here are two glossy articles that could serve as reference for databases in a scientific context (and centralization issues):
disgusting_wall has quit [Quit: Connection closed for inactivity]
<dignifiedquire> good morning everyone
hoony has quit [Quit: hoony]
m0ns00n has quit [Quit: undefined]
s_kunk has joined #ipfs
joshbuddy has quit [Quit: joshbuddy]
<Kubuxu> \o
m0ns00n has joined #ipfs
cryptix has joined #ipfs
cryptix has quit [Client Quit]
<dignifiedquire> whyrusleeping: daviddias you gonna love this https://medium.com/@wob/the-sad-state-of-web-development-1603a861d29f#.soo884rnp
<ansuz> I agree with most of the points, but actually find js to be a wonderful language
<ansuz> development practices mostly do suck
<ansuz> and dependency graphs are often absurd in the ecosystem
cryptix has joined #ipfs
<ansuz> the problems are mostly pretty easy to avoid assuming you have control over which packages you use
joshbuddy has joined #ipfs
zorglub27 has joined #ipfs
M-victorm has quit [Ping timeout: 240 seconds]
M-victorm has joined #ipfs
voxelot has quit [Ping timeout: 240 seconds]
M-edrex has quit [Ping timeout: 240 seconds]
M-edrex has joined #ipfs
rektide has quit [Ping timeout: 240 seconds]
rschulman has quit [Ping timeout: 240 seconds]
chriscool has quit [Ping timeout: 250 seconds]
M-nated has quit [Ping timeout: 250 seconds]
M-rschulman1 has joined #ipfs
M-nated has joined #ipfs
M-eternaleye has quit [Ping timeout: 250 seconds]
<dignifiedquire> ansuz: yeah, but you often have only limited control over the packages, and this notion of splitting projects into such small pieces that when you start installing them you actually need a list of modules to install before they do anything is just getting ridiculous
m0ns00n has quit [Quit: undefined]
kaliy has quit [Ping timeout: 250 seconds]
chriscool has joined #ipfs
<dignifiedquire> I enjoy writing react and es6 and javascript a lot, but the tooling and complexity is just going through the roof, often without any real benefit
M-davidar has quit [Ping timeout: 240 seconds]
rektide has joined #ipfs
kaliy has joined #ipfs
moreati has joined #ipfs
M-eternaleye has joined #ipfs
<ansuz> yea
<ansuz> personally I have complete control over everything I use
<ansuz> and usually I write from scratch
<ipfsbot> [js-ipfs-api] Dignifiedquire created greenkeeper-stream-http-2.0.3 (+1 new commit): http://git.io/vzYmj
<ipfsbot> js-ipfs-api/greenkeeper-stream-http-2.0.3 c5c7086 greenkeeperio-bot: chore(package): update stream-http to version 2.0.3...
<ansuz> I was writing scheme before I got into js
M-davidar has joined #ipfs
<ansuz> sometimes the tiny modules are perfect, sometimes they're stupid
<ansuz> I see a lot of projects with lodash, underscore, and a few other similar libs in their tree
<whyrusleeping> dignifiedquire: thats a great article
<ipfsbot> [js-ipfs-api] Dignifiedquire deleted greenkeeper-stream-http-2.0.3 at c5c7086: http://git.io/vzY30
jfis has quit [Read error: Connection reset by peer]
<dignifiedquire> whyrusleeping: I knew you'd enjoy that one
bedeho has joined #ipfs
<ipfsbot> [js-ipfs-api] Dignifiedquire created greenkeeper-stream-http-2.0.4 (+1 new commit): http://git.io/vzY3o
<ipfsbot> js-ipfs-api/greenkeeper-stream-http-2.0.4 0a2703e greenkeeperio-bot: chore(package): update stream-http to version 2.0.4...
<ansuz> I often feel like I'm the only nodejs dev who's just writing on top of http
<ansuz> express is everywhere
<ipfsbot> [js-ipfs-api] Dignifiedquire created greenkeeper-stream-http-2.0.5 (+1 new commit): http://git.io/vzYsT
<ipfsbot> js-ipfs-api/greenkeeper-stream-http-2.0.5 67d15a6 greenkeeperio-bot: chore(package): update stream-http to version 2.0.5...
<The_8472> "perfection is achieved not when there is nothing more to add, but when there's nothing left to take away"
cryptix has quit [Ping timeout: 250 seconds]
jhulten has joined #ipfs
<ansuz> inside every large program is a small program struggling to get out
<ipfsbot> [js-ipfs-api] greenkeeperio-bot opened pull request #187: stream-http@2.0.5 breaks build
opal has quit [Quit: <3]
jgraef2 has joined #ipfs
wowaname has joined #ipfs
jhulten has quit [Ping timeout: 240 seconds]
wowaname has quit [K-Lined]
jgraef2 is now known as jgraef
devbug has joined #ipfs
NightRa has joined #ipfs
jaboja has quit [Ping timeout: 255 seconds]
m0ns00n has joined #ipfs
devbug has quit [Ping timeout: 260 seconds]
The_8472 has quit [Ping timeout: 264 seconds]
m0ns00n has quit [Client Quit]
zorglub27 is now known as maxlath
The_8472 has joined #ipfs
whoisterencelee has joined #ipfs
SebastianCB has quit [Ping timeout: 276 seconds]
m0ns00n has joined #ipfs
unforgiven512 has quit [Quit: ZNC - http://znc.in]
unforgiven512 has joined #ipfs
unforgiven512 has quit [Max SendQ exceeded]
unforgiven512 has joined #ipfs
whoisterencelee has quit [Ping timeout: 252 seconds]
maxlath has quit [Ping timeout: 240 seconds]
JasonWoof has quit [Read error: Connection reset by peer]
M-davidar has quit [Ping timeout: 276 seconds]
hiredman has quit [Ping timeout: 276 seconds]
hiredman has joined #ipfs
brixen has quit [Ping timeout: 276 seconds]
brixen has joined #ipfs
jgraef has quit [Ping timeout: 260 seconds]
M-davidar has joined #ipfs
ilyaigpetrov has joined #ipfs
<ipfsbot> [js-ipfs-api] greenkeeperio-bot opened pull request #188: stream-http@2.0.4 breaks build
maxlath has joined #ipfs
<patagonicus> For an IPNS address to remain resolvable the node that has the private key has to stay online, right? How long is the TTL for published names?
<Kubuxu> patagonicus: 24h
<The_8472> ipns pinning from other nodes is a planned feature afaik
<Kubuxu> Problem is that it would have to change the way IPNS works.
<Kubuxu> (Introduce IPRS and records w/o expiration date).
<The_8472> you can still have expiration dates
<Kubuxu> As currently IMHO IPNS has nothing in common with permanent web.
<The_8472> pinning nodes would simply help with republishing whatever the latest entry is
<The_8472> the original node could still replace them with newer ones
<The_8472> or whoever holds the keypar
<The_8472> *keypair
<patagonicus> Yeah, IPNS pinning would solve other problems I have. :)
<Kubuxu> But my node wanting to resolve it will throw those entries out as they are expired.
<patagonicus> And what's this IPLD I keep hearing about? The repo's readme isn't really helpful …
<The_8472> well, expiration is orthogonal to 3rd party republishing
<Kubuxu> expiration is included in the signed entry.
<The_8472> yes, but within the expiration interval you can still republish
<Kubuxu> patagonicus: it is a new way to create linked structures.
<The_8472> think 6 months expiration... with 3rd party republish you don't have to stay online for those 6 months
<Kubuxu> you wouldn't, but the current expiration is hard-coded to 24h.
<patagonicus> Ok, thanks. :)
<The_8472> yeah, that needs changing. just saying you still can have expiration
<Kubuxu> patagonicus: it will allow for example to make simple and compact git-link graphs.
<Kubuxu> Currently you would have to use files and so on, or try to cheat it into the existing system; IPLD will allow you to do it much more easily
<Kubuxu> 6 months expiration is still not permanent web. I know that you can switch to, for example, the resolved hash but ....
<The_8472> IPNS is mutable. it'll never be permanent
<The_8472> think of it as head-pointer into an append-only data structure
<Kubuxu> I don't see why it can't be permanent. As you said: it is the head pointer of an append-only structure. This structure can be permanent, so the head pointer can be also.
<The_8472> then you don't need the pointer in the first place
<The_8472> you can just refer to the ipfs hash
<Kubuxu> I need, to find new head.
<The_8472> but if there can be a new one then it's mutable
<Kubuxu> permanent in IPFS means that the life of an object can be extended beyond the original author's interest in it.
brianisnotarobot has joined #ipfs
<The_8472> i thought we're talking about ipns?
<Kubuxu> that also applied to IPNS
<Kubuxu> (IPFS as whole project)
<Kubuxu> s/applied/applies
<The_8472> you could keep a log of old revisions of ipns records if you wanted, but someone looking for an ipns name wouldn't know that you were referring to an old instance. if you want to specifically refer to a particular instance you might as well refer to the ipfs hash it resolves to
<The_8472> maybe my imagination is limited, but i see no particular use for keeping old ipns records when you can use the underlying ipfs path anyway
<drathir> in theory, if ipfs has one bootstrap node added, and it's a cjdns node outside the lan, and there's a local lan cjdns ipfs node which has the file pinned, should ipfs detect that one and make a lan-local connection?
<Kubuxu> I say that in some cases an old IPNS entry is better than none.
<The_8472> well, republishing gets you that
<drathir> or does that lan node need to be added into the bootstrap list directly?
<Kubuxu> No you won't as it will expire.
<Kubuxu> drathir: both are in cjdns?
<The_8472> you could just ignore expiration
<Kubuxu> Dangerous as in some cases expiration makes sense.
<The_8472> those probably are not the cases where you want to look at old stuff?
<Kubuxu> exactly
<drathir> yes, correct, both are connected to the same cjdns bootstrap node outside the lan...
<The_8472> so in those cases you can ignore it
<Kubuxu> drathir: they will find each other, might be faster if you try requesting file available on the other node.
Tristitia has quit [Remote host closed the connection]
<drathir> also the lan nodes ofc are peered with each other over cjdns...
<Kubuxu> The_8472: case where expiration makes sense: software updates, you don't want someone to launch delayed replay attack, by separating someone from network and showing him only old entries for given software.
<drathir> k, i will run some tests to see how well that's working...
<Kubuxu> There is in works cjdns discovery also (using cjdns' dht to try peering with random nodes).
<The_8472> Kubuxu, unless you want to pull an old version
<Kubuxu> then you can explicitly search records history.
<drathir> Kubuxu: ofc accessing it at the local gateway...
<Kubuxu> but you don't want Eve filtering entries that Alice gets so she gets old (buggy) software.
<ipfsbot> [js-ipfs-api] Dignifiedquire pushed 1 new commit to master: http://git.io/vzYya
<ipfsbot> js-ipfs-api/master 3afd1bf Friedel Ziegelmayer: Merge pull request #188 from ipfs/greenkeeper-stream-http-2.0.4...
<The_8472> sure, but that's up to the client to verify. that doesn't keep anyone from republishing the records, even if they're expired
<Kubuxu> For example: in the case of a blockchain, expiration might be 2x the block time. In the case of an average site: infinite; even if the original author stops caring, if someone cares to republish it, it would live on (and URIs are not broken)
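A toy model of the trade-off Kubuxu and The_8472 are debating (this is NOT the real IPNS record format, just a sketch): the expiry sits inside the signed record, so a third-party republisher can keep a record alive only while it is still valid; once the embedded expiry passes, resolvers reject it no matter who re-announces the bytes.

```go
package main

import "fmt"

// Record is a toy IPNS-like record. ExpiresAt is imagined as part of the
// signed payload, so only the keypair holder can mint a later expiry.
type Record struct {
	Value     string  // e.g. an /ipfs/... path
	Sequence  uint64  // higher sequence = newer head
	ExpiresAt float64 // hypothetical timestamp baked into the signature
}

// resolve picks the highest-sequence record that has not yet expired.
func resolve(records []Record, now float64) *Record {
	var best *Record
	for i := range records {
		r := &records[i]
		if now >= r.ExpiresAt {
			continue // expired: rejected even if a 3rd party republishes it
		}
		if best == nil || r.Sequence > best.Sequence {
			best = r
		}
	}
	return best
}

func main() {
	old := Record{"/ipfs/old", 1, 100}
	newer := Record{"/ipfs/new", 2, 200}
	recs := []Record{old, newer}
	fmt.Println(resolve(recs, 50).Value) // /ipfs/new
	fmt.Println(resolve(recs, 250))      // <nil> -- everything expired
}
```

With a long validity window, republishing keeps the name resolvable while the publisher is offline; with the hard-coded 24h window, the record dies a day after the key holder stops republishing, which is the gap being discussed.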
Encrypt has quit [Quit: Quitte]
<ipfsbot> [webui] greenkeeperio-bot opened pull request #183: eslint-plugin-react@3.15.0 breaks build ⚠️ (master...greenkeeper-eslint-plugin-react-3.15.0) http://git.io/vzY5I
hoony has joined #ipfs
myrmicid has quit [Read error: Connection reset by peer]
hoony has quit [Ping timeout: 272 seconds]
jhulten has joined #ipfs
nonluminous has joined #ipfs
mildred has quit [Read error: Connection reset by peer]
mildred has joined #ipfs
jandresen has joined #ipfs
jhulten has quit [Ping timeout: 240 seconds]
hellertime has joined #ipfs
computerfreak has joined #ipfs
JasonWoof has joined #ipfs
Senji has quit [Read error: Connection reset by peer]
devbug has joined #ipfs
jgraef has joined #ipfs
jaboja has joined #ipfs
Senji has joined #ipfs
brianisnotarobot has quit [Ping timeout: 252 seconds]
devbug has quit [Ping timeout: 260 seconds]
<ipfsbot> [webui] Dignifiedquire pushed 1 new commit to master: http://git.io/vzOe9
<ipfsbot> webui/master cfd78f0 Friedel Ziegelmayer: Merge pull request #183 from ipfs/greenkeeper-eslint-plugin-react-3.15.0...
<ipfsbot> [webui] greenkeeperio-bot opened pull request #184: react-router-bootstrap@0.20.1 breaks build ⚠️ (master...greenkeeper-react-router-bootstrap-0.20.1) http://git.io/vzOvg
M-davidar has quit [Ping timeout: 250 seconds]
padz_ has joined #ipfs
padz has quit [Ping timeout: 276 seconds]
M-davidar has joined #ipfs
jandresen has quit [Ping timeout: 245 seconds]
Senji has quit [Disconnected by services]
Senj has joined #ipfs
Senj is now known as Senji
Encrypt has joined #ipfs
ljhms has quit [Ping timeout: 260 seconds]
<ipfsbot> [webui] Dignifiedquire pushed 1 new commit to master: http://git.io/vzOLp
<ipfsbot> webui/master 70e6faf Friedel Ziegelmayer: Merge pull request #184 from ipfs/greenkeeper-react-router-bootstrap-0.20.1...
ljhms has joined #ipfs
s_kunk has quit [Ping timeout: 240 seconds]
s_kunk has joined #ipfs
<Kubuxu> lgierth: have you figured out this multiplexing?
<ipfsbot> [webui] greenkeeperio-bot opened pull request #185: Update i18next-browser-languagedetector to version 0.0.14
nonluminous has quit [Read error: Connection reset by peer]
cemerick has joined #ipfs
<lgierth> Kubuxu: it'll be a little go daemon
<Kubuxu> One note that has to be accounted for: keep-alive
<Kubuxu> It has to work at the request level, not the connection level, but if you are doing a go program it will already work at the http level.
jandresen has joined #ipfs
Encrypt has quit [Quit: Quitte]
* daviddias finally: INTERNET :D
<Kubuxu> home sweet home?
<ansuz> :D
<ansuz> wb daviddias
<ansuz> I don't really celebrate connecting to the internet anymore, though
<ansuz> only hyperboria
<daviddias> Loustic coffee shop, close to our initial plan Anticafe BeauBorg
Guest39845 has joined #ipfs
<ansuz> beaubourg, eh?
<ansuz> interesting, not actually on beaubourg
Encrypt has joined #ipfs
<ansuz> k, so you're about 3 minutes walk from here
reit has joined #ipfs
anon3459 has quit [Ping timeout: 260 seconds]
<ipfsbot> [go-ipfs] whyrusleeping pushed 1 new commit to feat/mfs-flush-cmd: http://git.io/vzOcM
<ipfsbot> go-ipfs/feat/mfs-flush-cmd 39db30c Jeromy: add context to files reading...
moreati has quit [Ping timeout: 240 seconds]
jandresen has quit [Ping timeout: 256 seconds]
<daviddias> Anticafe has nicer wifi, but this is not bad as well
<Kubuxu> daviddias: you were in Kraków?
<daviddias> never been there, why?
<Kubuxu> ahh, someone else added place to your list there.
<ansuz> daviddias: xwiki is pretty great for hacking
<ansuz> come for the ansuz, stay for the babyfoot
<daviddias> babyfoot?
<daviddias> Kubuxu: list is shared, everyone can PR :)
moreati has joined #ipfs
<moreati> AFAICT There's no Debian or Ubuntu packages for IPFS. Is that because no one's done it yet? The project doesn't want them (yet)? Other? (sorry if this is a dupe, wifi weirdness at my end)
<whyrusleeping> moreati: we wouldnt mind a package in debian/ubuntu
<moreati> I'll give it a go then :)
<whyrusleeping> moreati: wonderful, thank you :)
ELFrederich_ has joined #ipfs
pfraze has joined #ipfs
ELFrederich_ is now known as ELFrederich
chriscool has quit [Quit: Leaving.]
zoobab has joined #ipfs
<zoobab> hi
<zoobab> working on an openwrt port
<zoobab> so that we can have cheap IPFS devices with a simple NET2USB
<zoobab> and you plug a hard drive at its back
jholden_ has joined #ipfs
jholden has quit [Ping timeout: 256 seconds]
the193rd has joined #ipfs
<ipfsbot> [go-ipfs] whyrusleeping pushed 1 new commit to master: http://git.io/vzOgu
<ipfsbot> go-ipfs/master b7d35e3 Jeromy: IPFS Versions 0.3.11 release...
jhulten has joined #ipfs
<ipfsbot> [go-ipfs] whyrusleeping tagged v0.3.11 at master: http://git.io/vzOgV
<ipfsbot> [go-ipfs] whyrusleeping merged master into release: http://git.io/vzOgH
<whyrusleeping> choo choo!
<lgierth> zoobab: i'm about to head out, but let me know if you need help or have questions about openwrt
<lgierth> and be aware that ipfs requires go1.5, so gcc-go won't work
<lgierth> the little mips routers are also not very good at crypto ;)
<lgierth> (there are more powerful arm-based openwrt devices now of course)
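lgierth's constraints above (go-ipfs needs Go >= 1.5, so gcc-go is out) mean OpenWrt builds have to cross-compile with the standard Go toolchain. A minimal sketch for one of the ARM-based devices he mentions; the GOARM value and output name are assumptions, and MIPS targets were not yet supported by the Go 1.5 toolchain:

```shell
# Cross-compile go-ipfs for an ARM OpenWrt device with the standard Go
# toolchain (gcc-go won't work, per the discussion above).
# GOARM=7 is an assumption; use 5 or 6 for older ARM cores.
cd "$GOPATH/src/github.com/ipfs/go-ipfs"
GOOS=linux GOARCH=arm GOARM=7 go build -o ipfs-arm ./cmd/ipfs
```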
<achin> everybody on the release train!
elima has quit [Ping timeout: 255 seconds]
<lgierth> You pay 5 euros per hour and everything on the menu is free.
<lgierth> nice
<lgierth> ok gotta run
<whyrusleeping> dignifiedquire: heyo, you around?
<dignifiedquire> whyrusleeping: I'm afraid so
jhulten has quit [Ping timeout: 260 seconds]
maxlath has quit [Ping timeout: 272 seconds]
brian has joined #ipfs
brian is now known as Guest34111
ilyaigpetrov has quit [Quit: Connection closed for inactivity]
<dignifiedquire> whyrusleeping: whats up?
<zoobab> go on openwrt seems to be not included in trun
<whyrusleeping> dignifiedquire: anything i can do to help distributions along?
<zoobab> trunk
Guest34111 is now known as otherbrian
<otherbrian> hi, i have a question about the development process...
<whyrusleeping> otherbrian: i probably have an answer
<otherbrian> i noticed that the main project (e.g. go-ipfs) contains subprojects (e.g. go-libp2p) and the subprojects are often ahead
<otherbrian> a) where should development happen?
<otherbrian> b) when do these merge?
<otherbrian> c) how do they merge?
<whyrusleeping> otherbrian: that depends on what youre working on
<whyrusleeping> generally, we do any work on the subproject that we want to change
<whyrusleeping> and then those changes are tested and merged in that project
<dignifiedquire> whyrusleeping: these two things from last night
<whyrusleeping> then once those are in, we can do a pull request to update the dependencies into the main project
<dignifiedquire> dignifiedquire> 2 small things for dists before I can publish a preview to ipfs with downloads working, the links in the architecture don't have the correct ending, now that everything is zipped
<dignifiedquire> 21:54 <dignifiedquire> and the other is that the "id" for go-ipfs is "ipfs" but it needs to be "go-ipfs"
<whyrusleeping> dignifiedquire: ah, cool. i'll do that right now
jandresen has joined #ipfs
<dignifiedquire> whyrusleeping: make sure to pull before I just pushed some stuff I did last night
<dignifiedquire> whyrusleeping: making the PR for the new webui now
<whyrusleeping> dignifiedquire: having the id be 'go-ipfs' is mildly obnoxious...
<whyrusleeping> i use the name of the binary as the id
<dignifiedquire> hmm
<whyrusleeping> i guess i could just use the name of the dist folder
<dignifiedquire> but they should be different, and if we add js-ipfs what do you do then? :P
<whyrusleeping> inferior-language-ipfs ;)
<dignifiedquire> lol
<whyrusleeping> slow-ipfs
<whyrusleeping> single-threaded-ipfs
<dignifiedquire> 1 million module ipfs
<whyrusleeping> javascripts are a great language, i'm sure that all will be fine
<whyrusleeping> i'll change the tag to be go-ipfs
<dignifiedquire> thanks :)
<dignifiedquire> whyrusleeping: fs-repo-migrations doesn't like me
<whyrusleeping> join the club
<dignifiedquire> I have an 0.4 repo and try to run master
maxlath has joined #ipfs
<dignifiedquire> whyrusleeping: so how do I make it to go to 2
elima has joined #ipfs
<whyrusleeping> uhm
<whyrusleeping> you don't want to do that
<dignifiedquire> :(
<dignifiedquire> ok
<whyrusleeping> go get github.com/ipfs/fs-repo-migrations/ipfs-2-to-3
<dignifiedquire> but ipfs daemon tells me to use fs-repo-migrations
<whyrusleeping> ipfs-2-to-3 -revert
<dignifiedquire> I see
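whyrusleeping's two commands above, put together as one sequence. A sketch only: it assumes `$GOPATH/bin` is on `PATH` and that the repo really is at migration version 3 (the 0.4.0 format):

```shell
# Fetch the 2-to-3 migration tool and revert an already-migrated repo
# back to version 2 so a 0.3.x daemon can read it again.
go get github.com/ipfs/fs-repo-migrations/ipfs-2-to-3
ipfs-2-to-3 -revert
```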
<whyrusleeping> hrm... i guess i could throw reverting into that binary real quick
<whyrusleeping> i had intended to keep that nonsense away from end users
<whyrusleeping> but whatever
<dignifiedquire> makes sense
<achin> backups!
<dignifiedquire> leave it out
<whyrusleeping> leave it out? c'est bon
warptangent has quit [Ping timeout: 240 seconds]
<C-Keen> I would like to omit the fuse feature, is there a way to disable the build of that?
<whyrusleeping> C-Keen: go build ./cmd/ipfs/ -tags=nofuse
<whyrusleeping> i think
<dignifiedquire> whyrusleeping: hmm not working
<C-Keen> whyrusleeping: thanks
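whyrusleeping hedges ("i think"), and indeed the Go tool wants build flags before the package path, so the form that should actually work is:

```shell
# Build the ipfs binary without FUSE support. The -tags flag must come
# before the package argument, or "go build" treats it as a file name.
cd "$GOPATH/src/github.com/ipfs/go-ipfs"
go build -tags nofuse ./cmd/ipfs
```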
<dignifiedquire> ipfs-2-to-3 -revert -path ~/.ipfs
<dignifiedquire> error: cannot load indirect pins: merkledag: not found
<whyrusleeping> dignifiedquire: dude, that sounds like a personal problem
<dignifiedquire> lol
<dignifiedquire> I'm just gonna nuke it
<dignifiedquire> who needs migrations if there is init -f
<whyrusleeping> ^^^ this guy
<achin> > zfs snapshot storage/home/achin/.ipfs@premigration
<whyrusleeping> achin: not a bad idea
<whyrusleeping> although i've tested the going forward quite well
<whyrusleeping> (i'm actually looking into what dignifiedquire encountered here)
ashark has joined #ipfs
jhulten has joined #ipfs
warptangent has joined #ipfs
<otherbrian> whyrusleeping: does IPFS rely on STUN at all? or is everything taken care of with the nat library?
<whyrusleeping> otherbrian: we don't use STUN
<otherbrian> reading through the specs it says you can use it but i don't see it in the go implementation
<whyrusleeping> right, its on the 'todo' list
<whyrusleeping> but its not there yet
<whyrusleeping> as its quite tricky to do in a way that works for us
<whyrusleeping> we have our own ip address discovery protocol
<whyrusleeping> and a few other nat traversal utilities
<whyrusleeping> we don't have any sort of tunneling yet though
<otherbrian> beyond NAT-PMP and UPnP?
<whyrusleeping> yeah, we do tcp reuseport
<whyrusleeping> in 0.4.0 we use udp for hole punching
<whyrusleeping> and our dht helps us with address discovery
<The_8472> use ipv6 today and stop worrying!
<otherbrian> :)
<The_8472> </psa>
<ipfsbot> [webui] greenkeeperio-bot opened pull request #186: i18next@2.0.19 breaks build ⚠️ (master...greenkeeper-i18next-2.0.19) http://git.io/vzOyO
Matoro_ has quit [Ping timeout: 256 seconds]
Encrypt has quit [Quit: Quitte]
<ipfsbot> [webui] greenkeeperio-bot opened pull request #187: stream-http@2.0.5 breaks build ⚠️ (master...greenkeeper-stream-http-2.0.5) http://git.io/vzOSt
<whyrusleeping> dignifiedquire: pushed distributions updates
<whyrusleeping> if you want to PR a new webui i can add that to 0.3.11 as well
<dignifiedquire> whyrusleeping: working on the webui now
<dignifiedquire> nearly done
<ipfsbot> [js-ipfs] diasdavid created status/update-jan-12-2016 (+1 new commit): http://git.io/vzOHY
<ipfsbot> js-ipfs/status/update-jan-12-2016 949d00c David Dias: update project roadmap, make it more explicit, help organize
<ipfsbot> [js-ipfs] diasdavid opened pull request #47: update project roadmap, make it more explicit, help organize (master...status/update-jan-12-2016) http://git.io/vzOHc
<otherbrian> @whyrusleeping so in the tcp reuseaddr/reuseport case, you are just connecting to a known peer endpoint (from bootstrapping) and then continuing to use that open socket for everything else?
chriscool has joined #ipfs
Matoro_ has joined #ipfs
<whyrusleeping> otherbrian: basically, yeah
Tristitia has joined #ipfs
Matoro_ has quit [Ping timeout: 240 seconds]
<otherbrian> and instead of TURN servers, relaying is done through other peers?
<achin> at the moment, i don't believe ipfs does any relaying
<otherbrian> it's mentioned in the p2p spec but perhaps not implemented?
<achin> right. that's my understanding
<richardlitt> whyrusleeping: how do I update to 0.4.0, again?
<achin> richardlitt: code or ipfsrepo?
<richardlitt> achin: I don't know the difference.
<richardlitt> I'm trying to update the HTTP API, and I want to go off of the 0.4.0 functionality and use 0.4.0 man docs
<achin> otherbrian: i would be very happy to learn that i'm wrong
<richardlitt> achin: What is the difference?
<achin> well, to build ipfs, it's just: checkout dev0.4.0 branch and "go build".
<achin> but 0.4.0 has a different repo format, so you either have to upgrade your $IPFS_PATH, or run 0.4.0 with a new $IPFS_PATH
<dignifiedquire> !pin QmRyWyKWmphamkMRnJVjUTzSFSAAZowYP4rnbgnfMXC9Mr
<pinbot> now pinning /ipfs/QmRyWyKWmphamkMRnJVjUTzSFSAAZowYP4rnbgnfMXC9Mr
zen|merge_ has quit [Remote host closed the connection]
<achin> (personally, i have two $IPFS_PATHs, one for 0.3.x and one for 0.4.0, so i can run both nodes at the same time)
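achin's two-repo setup above can be sketched like this. The repo paths are illustrative, and `ipfs-dev` is a hypothetical name for a separately built dev0.4.0 binary (both versions install as `ipfs` by default):

```shell
# Run a 0.3.x node and a 0.4.0 node side by side by giving each binary
# its own repo via IPFS_PATH, as achin describes.
export IPFS_PATH="$HOME/.ipfs-0.3"
ipfs daemon &                                  # 0.3.x binary

IPFS_PATH="$HOME/.ipfs-0.4" ipfs-dev init      # hypothetical dev0.4.0 binary name
IPFS_PATH="$HOME/.ipfs-0.4" ipfs-dev daemon &
```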
pfraze has quit [Remote host closed the connection]
<richardlitt> Why would I want to build not upgrade?
<richardlitt> Can you help explain the difference?
<dignifiedquire> whyrusleeping: https://github.com/ipfs/webui/pull/188
<whyrusleeping> richardlitt: you can use ipfs-update
<ipfsbot> [webui] Dignifiedquire opened pull request #188: Fixes and release of a new version for 0.3.11 (master...0-3-11) http://git.io/vzOxr
<richardlitt> Code and IPFSrepo seem the same, to me.
<whyrusleeping> its the easiest way IMO
<richardlitt> whyrusleeping: Should I be using v0.4.0-dev for building the API?
<whyrusleeping> yes
<otherbrian> thanks for the info guys, just starting to dig around. super interesting project.
<ipfsbot> [go-ipfs] Dignifiedquire opened pull request #2188: feat: Update to the latest version of the webui (master...feat/webui-update) http://git.io/vzOpj
<whyrusleeping> otherbrian: bear with us for a little bit, we're working on a transition for merging in a new version with lots of changes
<whyrusleeping> so there might be a bit of mess during the process
<dignifiedquire> whyrusleeping: and https://github.com/ipfs/go-ipfs/pull/2188
<dignifiedquire> please test
zen|merge has joined #ipfs
zen|merge has quit [Changing host]
zen|merge has joined #ipfs
Thexa4 has joined #ipfs
<achin> richardlitt: sorry, i don't think i properly understood your question. but if you're no longer confused, then no need to explain it to me :)
M-roblabla1 has joined #ipfs
<richardlitt> achin: I'm confused by your answer, but not my question :)
<otherbrian> will do. i've been day-dreaming about something sort of like this for a while so it's cool to find it and see that it's really happening. hope to find something i can contribute to after i find my way around.
<richardlitt> whyrusleeping: works, thanks.
<achin> if you're confused by my answer, that means i was confsued by your question! this is starting to feel like a monday morning
<ipfsbot> [webui] Dignifiedquire closed pull request #187: stream-http@2.0.5 breaks build ⚠️ (master...greenkeeper-stream-http-2.0.5) http://git.io/vzOSt
mildred has quit [Quit: Leaving.]
<ipfsbot> [webui] Dignifiedquire pushed 1 new commit to master: https://github.com/ipfs/webui/commit/795a71b73784dc1a7245c5276edd049440c86fe9
<ipfsbot> webui/master 795a71b Friedel Ziegelmayer: Merge pull request #186 from ipfs/greenkeeper-i18next-2.0.19...
<dignifiedquire> richardlitt: can you please test the new webui with 0.3.11 as well
<Thexa4> Hi, I'm trying to use ipfs in an offline environment and can't seem to find documentation about caching of ipns records. Do I need to be able to reach the original node to resolve an ipns key?
zen|merge has quit [Read error: Connection reset by peer]
devbug has joined #ipfs
<otherbrian> whyrusleeping: twitter says you are in seattle?
<ipfsbot> [webui] Dignifiedquire closed pull request #185: Update i18next-browser-languagedetector to version 0.0.14
<whyrusleeping> otherbrian: well, normally yes
<otherbrian> whois says georgia
<otherbrian> :P
<whyrusleeping> je suis a paris
<whyrusleeping> for now
<otherbrian> whois is very wrong then!
<whyrusleeping> quite!
<otherbrian> well cool, i am in seattle
<achin> whyrusleeping is a world-traveler
<whyrusleeping> otherbrian: awesome, might have to meet up when i get back for some coffee
<otherbrian> we do it best
<whyrusleeping> havent found better
<richardlitt> whyrusleeping: https://github.com/ipfs/go-ipfs/issues/2189
<whyrusleeping> damn, that loaded fast
<otherbrian> take care, see you around.
otherbrian has quit [Quit: Page closed]
<richardlitt> whyrusleeping: can you help me with this? https://gist.github.com/RichardLitt/16dbf55f05f83846854d
<whyrusleeping> ifps daemon?
<whyrusleeping> go get -u github.com/ipfs/fs-repo-migrations
<richardlitt> heh, I have an alias
<richardlitt> Cool. Thanks.
devbug has quit [Ping timeout: 272 seconds]
<whyrusleeping> lol
chriscool has quit [Quit: Leaving.]
vijayee has joined #ipfs
<vijayee> is there a node merkledag datastructure for ipfs implemented?
<whyrusleeping> vijayee: i beleive so
<whyrusleeping> daviddias: was working on that
rombou has joined #ipfs
<daviddias> merkledag as in with protobufs -> WIP
<vijayee> oh sweet, I started working on one from the go version
<vijayee> but I thought I should ask before I roll out my reinvented wheel
<vijayee> they see me rollin...
<vijayee> daviddas: is it far along? what is needed?
<tperson> If you change the way the merkle tree is built that changes the hash, it would also change the way you destruct the underlying file too. Is that a versioned/named system?
<dignifiedquire> daviddias: please test the new webui in the PR as well
<vijayee> daviddias: screwed up the name
<daviddias> vijayee cool! where can I check that? I started it recently, spec here: https://github.com/ipfs/specs/pull/57, impl here: https://github.com/ipfs/js-ipfs-data-importing (only published the chunker so far)
O47m341 has joined #ipfs
<whyrusleeping> dignifiedquire: it all works good for me
elima has quit [Ping timeout: 265 seconds]
<dignifiedquire> whyrusleeping: :) sneaked in location fixes as well, as ipfs-geoip is back working :)
<whyrusleeping> wheeee :)
<whyrusleeping> lets ship it
<whyrusleeping> daviddias isnt going to test
<dignifiedquire> okay I pinned it with pinbot, where should we pin it in addition?
<whyrusleeping> uhm
<whyrusleeping> pinbot should be good enough
<dignifiedquire> okay
<whyrusleeping> add the has to...
<whyrusleeping> hash*
<dignifiedquire> whyrusleeping: done
<dignifiedquire> merge when you are happy
<ipfsbot> [webui] Dignifiedquire pushed 1 new commit to master: http://git.io/vz3q8
<ipfsbot> webui/master 03dfa88 Friedel Ziegelmayer: Merge pull request #188 from ipfs/0-3-11...
<ipfsbot> [go-ipfs] whyrusleeping pushed 2 new commits to master: http://git.io/vz3qX
<ipfsbot> go-ipfs/master 7070b4d Jeromy Johnson: Merge pull request #2188 from Dignifiedquire/feat/webui-update...
<ipfsbot> go-ipfs/master ab61ef2 dignifiedquire: feat: Update to the latest version of the webui...
ulrichard has quit [Quit: Ex-Chat]
arpu has quit [Ping timeout: 240 seconds]
<ipfsbot> [go-ipfs] whyrusleeping tagged v0.3.11 at master: http://git.io/vzOgV
<whyrusleeping> alright, thats a release
<ipfsbot> [go-ipfs] whyrusleeping merged master into release: http://git.io/vz3qX
arpu has joined #ipfs
<daviddias> wooooooo
maxlath has quit [Quit: maxlath]
<daviddias> vijayee: looking at your code
jhulten has quit [Ping timeout: 240 seconds]
<richardlitt> daviddias: https://github.com/ipfs/api/pull/17 :)
<vijayee> daviddias: its not too far started yesterday but I'm trying to mirror how the data is structured
<daviddias> you can find a pretty much complete version of IPFS Repo here https://github.com/ipfs/js-ipfs-repo which has the block store (called datastore)
pfraze has joined #ipfs
<richardlitt> dignifiedquire: sorry, was busy trying to get through that PR
<daviddias> vijayee: you can also find more context of where we are going here https://github.com/ipfs/specs/pull/57/files and here https://github.com/ipfs/js-ipfs/issues/41
<daviddias> we can merge our efforts :)
<vijayee> daviddias: thanks that removes some headache I'll check out the spec
<dignifiedquire> richardlitt: no worries, you can file bugs when 3.11 is shipped ;)
jaboja has quit [Ping timeout: 272 seconds]
elima has joined #ipfs
<ipfsbot> [go-ipfs] whyrusleeping force-pushed dev0.4.0 from cf514d8 to 3224ae0: http://git.io/vLnex
<ipfsbot> go-ipfs/dev0.4.0 694abde David Dias: update version...
<ipfsbot> go-ipfs/dev0.4.0 c0ec377 Tommi Virtanen: pin: Guard against callers causing refcount underflow...
<ipfsbot> go-ipfs/dev0.4.0 a3de9bf Tommi Virtanen: sharness: Use sed in a cross-platform safe way...
anticore has joined #ipfs
jandresen has quit [Quit: Leaving]
jaboja has joined #ipfs
<ipfsbot> [webui] greenkeeperio-bot opened pull request #189: webpack-dev-server@1.14.1 breaks build ⚠️ (master...greenkeeper-webpack-dev-server-1.14.1) http://git.io/vz3ns
Matoro has joined #ipfs
ashark has quit [Ping timeout: 256 seconds]
Tv` has joined #ipfs
<ipfsbot> [webui] Dignifiedquire closed pull request #189: webpack-dev-server@1.14.1 breaks build ⚠️ (master...greenkeeper-webpack-dev-server-1.14.1) http://git.io/vz3ns
<Ape> Would it be possible to add refcounted direct pins?
<Ape> E.g. ipfs pin add <hash1>; ipfs pin add <hash1>; ipfs pin rm <hash1>; <hash1> still pinned because there were 2 adds, but only one rm
<Ape> If multiple applications would pin the same hash, I'd like to keep it pinned until there is nothing requiring it
<Ape> An alternative would be to attach a tag or reason to pins so that there may be multiple reasons listed to pin one hash
<Ape> Then you could only remove one "reason": ipfs pin rm --tag "application1" <hash1>
<The_8472> tags sound cleaner
<The_8472> ref counting is tricky
<The_8472> easy to get wrong
<Ape> Yeah
M-david has quit [Quit: node-irc says goodbye]
<Ape> With tags you could also list everything pinned by one application
<The_8472> yesterday someone suggested reflink copies. without pinning ipfs could just keep symlinks to the copied data. if the application decides to delete it the symlink would be dead and ipfs could decide to GC it
<Ape> Or clean up all pins for an application you are uninstalling
SuzieQueue has joined #ipfs
<The_8472> so you could do without any pins at all. symlinks from the ipfs repo to wherever the application stores it would act as opportunistic pins
O47m341 has quit [Ping timeout: 272 seconds]
<Ape> But would ipfs then need to scan all those symlinks all the time to see if they are deleted
<Ape> Or if the data changes
<Ape> Maybe it's enough to notice that when trying to actually read the data (e.g. when another peers requests it)
<The_8472> for example, yeah
<The_8472> or you could keep the symlink as pin + reflink copy as canonical storage. if the symlink is dead kill the reflink copy too
<The_8472> filesystem based pinning
<Ape> reflinking is kinda hard since it requires filesystem support and doesn't work when people have more than one filesystem
<The_8472> it's an optimization, yes. without it you get duplicate storage
<The_8472> but you currently have that anyway
<The_8472> block storage + wherever the files are on disk
voxelot has joined #ipfs
<The_8472> with symlinks only you can do away with that too at the cost of rechecking on read
arpu has quit [Remote host closed the connection]
<Ape> reflinking wouldn't hurt but it adds complexity and requires time to implement and test
jgraef has quit [Ping timeout: 240 seconds]
<The_8472> i think it's worth it if we're talking about terabytes of data
<Ape> And it would only benefit users in rare cases
<The_8472> rare cases such as someone ipfs add'ing their file collections?
<Ape> If you want to store your terabyte collection to IPFS you should just delete the original files after adding
<Ape> And replace them with symlinks to ipfs
<The_8472> then would mean entirely entrusting your data to ipfs
<The_8472> it's not something everyone wants to do
<whyrusleeping> richardlitt: kyledrake wanna do a short post about 0.3.11?
<Ape> True
<whyrusleeping> basically just mention that its the last release before 0.4.0, and maybe allude to 0.4.0 just across the horizon
varav has joined #ipfs
jhulten has joined #ipfs
<The_8472> i would feel more comfortable with having my data on zfs and letting ipfs either get its own CoW view on the data or symlinks.
<Ape> Is 0.3.11 being released today?
<The_8472> and not going through fuse also has performance benefits for samba&co
<Ape> The_8472: That's a good point
moreati has quit [Quit: Leaving.]
<Ape> Symlinking to ipfs instead of reflinking would make backups very nice, tho.
moreati has joined #ipfs
<Ape> Just backup the symlinks and it works automatically
<Ape> (And remember to pin the files somewhere else)
<Thexa4> Is it possible to sign a hash with my private key using the ipfs binary?
<The_8472> that's the opposite of I want, but to each their own
<Ape> Or perhaps just replace the whole filesystem with ipfs directories and only backup the root hash :)
<whyrusleeping> Are you all ready?
<whyrusleeping> here it comes
* whyrusleeping exit stage left
<ipfsbot> [go-ipfs] whyrusleeping closed pull request #1381: Dev0.4.0 (master...dev0.4.0) http://git.io/vLn3u
<Ape> Cool
<The_8472> no explosions yet
<whyrusleeping> i'm kinda worried it didnt say that i pushed commits to master
<Ape> go-ipfs branch list is so long... Could somebody clean up all merged branches there?
r1k03 has joined #ipfs
* ansuz out
<Kubuxu> Have someone force pushed to 0.4 some time ago?
<Ape> [go-ipfs] whyrusleeping force-pushed dev0.4.0 from cf514d8 to 3224ae0: http://git.io/vLnex
<Kubuxu> I wonder how many PRs broke.
Thexa4 has quit [Ping timeout: 256 seconds]
<dignifiedquire> whyrusleeping: at the moment there is an issue that 0.4.0 is the latest, though it probably should be 0.3.11 for now?
<dignifiedquire> whyrusleeping: pushed everything, I think we should merge for now
ashark has joined #ipfs
m0ns00n has quit [Quit: undefined]
<Kubuxu> whyrusleeping: should I move my PRs onto master, I will have to recreate them (either way I have to rebase them because there was force-push).
m0ns00n has joined #ipfs
patcon has joined #ipfs
chriscool has joined #ipfs
disgusting_wall has joined #ipfs
r1k03 has quit [Quit: Leaving]
simonv3 has joined #ipfs
chriscool has quit [Read error: Connection reset by peer]
chriscool has joined #ipfs
chriscool has quit [Read error: Connection reset by peer]
<ipfsbot> [go-ipfs] Kubuxu opened pull request #2191: Make dns resolve paths under _dnslink.example.com (master...feature/dnslink) http://git.io/vz3Qz
<ipfsbot> [go-ipfs] Kubuxu closed pull request #2184: Make dns resolve paths under _dnslink.example.com (dev0.4.0...feature/dnslink) http://git.io/vzfGs
jaboja64 has joined #ipfs
Encrypt has joined #ipfs
chriscool has joined #ipfs
jaboja has quit [Ping timeout: 256 seconds]
<dyce> oh so the hash is just a pointer, if the file changes the original hash still points to that file?
Matoro has quit [Ping timeout: 256 seconds]
<ipfsbot> [webui] greenkeeperio-bot opened pull request #190: stream-http@2.0.4 breaks build ⚠️ (master...greenkeeper-stream-http-2.0.4) http://git.io/vz3Nr
<ion> Which hash? IPFS is immutable, IPNS is mutable.
<dyce> oh ok, got it
patcon has quit [Ping timeout: 256 seconds]
<whyrusleeping> Kubuxu: don't worry about the PR's i can fix-ish them
The_8472 has quit [Ping timeout: 240 seconds]
The_8472 has joined #ipfs
jfis has joined #ipfs
ygrek has joined #ipfs
s_kunk has quit [Ping timeout: 240 seconds]
jaboja64 has quit [Ping timeout: 256 seconds]
<Kubuxu> I've fixed on that had merge conflicts. I leave the other one for you.
<whyrusleeping> sounds good, throw a comment in it and i'll get to it
<richardlitt> whyrusleeping: I could see what I can do.
<whyrusleeping> dignifiedquire: any way to make go-ipfs be the first entry?
varav has quit []
<whyrusleeping> if not, who cares
<dignifiedquire> not today, but generally yes
<whyrusleeping> mmkay
<whyrusleeping> #fuckitshipit
<dignifiedquire> what about 0.4 vs 0.3?
<whyrusleeping> ehm
<whyrusleeping> lets have it be 0.3.11
devbug has joined #ipfs
anticore has quit [Ping timeout: 245 seconds]
anticore has joined #ipfs
jaboja64 has joined #ipfs
devbug has quit [Ping timeout: 260 seconds]
ispeedtoo has joined #ipfs
computerfreak has quit [Remote host closed the connection]
rombou has quit [Ping timeout: 272 seconds]
<ipfsbot> [webui] greenkeeperio-bot opened pull request #191: Update i18next-browser-languagedetector to version 0.0.12
jamie_k has joined #ipfs
<ipfsbot> [webui] greenkeeperio-bot opened pull request #192: webpack@1.12.11 breaks build ⚠️ (master...greenkeeper-webpack-1.12.11) http://git.io/vzs4o
anticore has quit [Quit: bye]
<dignifiedquire> whyrusleeping: how do you want to handle this, just not build dev0.4.0?
<whyrusleeping> dignifiedquire: uhm... we should continue to build it
<whyrusleeping> isnt there a little config thing to tell it which version is the 'latest' ?
jamie_k has quit [Quit: jamie_k]
jamie_k_ has joined #ipfs
<dignifiedquire> whyrusleeping: at the moment I just take the last line in the "versions" file
<whyrusleeping> hrm... maybe we should have a little field for 'release version' ?
jgraef has joined #ipfs
<dignifiedquire> yeah, maybe generate another file in addition to the "versions" file, called "current" that has as content just the release version?
<whyrusleeping> could do that.
<whyrusleeping> put it in the dist folder?
<ispeedtoo> join
chriscool has quit [Quit: Leaving.]
<dignifiedquire> yes and ideally it would be copied alongside to the folder where the "versions" file resides
chriscool has joined #ipfs
<dignifiedquire> as that is what my code is looking at
<dignifiedquire> (only checking built things)
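One way the proposed "current" marker could look on disk. The dist folder layout and version strings here are assumptions, not the actual distributions repo structure:

```shell
# Record which build is the user-facing release, independently of
# whatever happens to be the last line of the "versions" file.
cd dist/go-ipfs
echo "v0.3.11" > current   # the release users should get by default
cat versions               # may also list v0.4.0-dev and other builds
```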
SuzieQueue has quit [Ping timeout: 256 seconds]
rendar has quit [Ping timeout: 264 seconds]
chriscool has quit [Quit: Leaving.]
chriscool has joined #ipfs
rendar has joined #ipfs
<ipfsbot> [webui] Dignifiedquire closed pull request #192: webpack@1.12.11 breaks build ⚠️ (master...greenkeeper-webpack-1.12.11) http://git.io/vzs4o
<ipfsbot> [webui] Dignifiedquire closed pull request #191: Update i18next-browser-languagedetector to version 0.0.12
chriscool has quit [Ping timeout: 250 seconds]
devbug has joined #ipfs
devbug has quit [Ping timeout: 250 seconds]
O47m341 has joined #ipfs
<ipfsbot> [webui] greenkeeperio-bot opened pull request #193: Update i18next-browser-languagedetector to version 0.0.13
zorglub27 has joined #ipfs
Senji has quit [Read error: Connection reset by peer]
<NightRa> Is there any documentation about IPFS-Files?
<NightRa> What it's supposed to do & how it works
fiatjaf has joined #ipfs
<ispeedtoo> Is there a fast way to troubleshoot why http://localhost:5001/ipfs/.../#/connections does not display anything? From the console the swarm is working fine?
Encrypt has quit [Quit: Quitte]
patcon has joined #ipfs
<whyrusleeping> dignifiedquire: want me to do the current thing?
<whyrusleeping> or do you want to?
<ispeedtoo> ok never mind, found the problem: curl is very slow!
<Kubuxu> Do we have some recursive scraper for IPFS? I'd like to scrape Lua's site.
m0ns00n has quit [Quit: undefined]
jfis has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<Ape> Kubuxu: Lua's site?
<Kubuxu> I just like Lua.
<Ape> Maybe you could try wget. I don't see how ipfs alone would scrape that
<patagonicus> NightRa: Don't think so. If you download a dev0.4.0 build that will contain some help (start with ipfs files --help), but that documentation is not complete.
<Kubuxu> I was just asking if there is something ready.
grahamperrin has left #ipfs ["Leaving"]
<Ape> Kubuxu: Kind of ready: wget -rkp -Dwww.lua.org http://www.lua.org
<Ape> + ipfs add -r on the resulting folder
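Ape's recipe above, spelled out in one place (wget's short options expanded for readability; the lua.org URL is the one from the discussion):

```shell
# Mirror a site and add the result to ipfs.
#   -r  recurse into links
#   -k  convert links so the mirror works offline
#   -p  also fetch page requisites (images, CSS)
#   -D  restrict recursion to the listed domain(s)
wget -rkp -Dwww.lua.org http://www.lua.org
ipfs add -r www.lua.org
```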
<Kubuxu> Yeah, I was just figuring this out.
otherbrian has joined #ipfs
<patagonicus> There's https://github.com/victorbjelkholm/ipfscrape which also uses wget and go-ipfs.
<dignifiedquire> whyrusleeping: if you can, yes please won't get to anything tonight anymore
<patagonicus> I can see a few problems with the script, but nothing bad. Just don't pass any URLs with spaces (or tabs or newlines) and don't run the script in parallel.
<Kubuxu> It does mirror just given page.
<NightRa> patagonicus: Alright
<Kubuxu> Already adding, thank Ape
<NightRa> Let me ask then: what is the role of IPFS-Files?
<NightRa> What is it supposed to achieve and do?
<patagonicus> Ah, right it's not recursive. I just kinda assumed that.
<Ape> I added it already :D
elima has quit [Ping timeout: 240 seconds]
<Kubuxu> Ape: the lua.org?
O47m341 has quit [Ping timeout: 240 seconds]
<Ape> Just see my ipfs.io link
<patagonicus> IPFS files lets you modify files and directories in IPFS without using the fuse mount. I *think* that it's faster than the fuse version, but I'm not sure.
<Kubuxu> Ahh, 0.4 is much quicker. :P
<Kubuxu> IPFS Files/MFS is mostly useful for creating directory structures and so on.
<Kubuxu> at least IMHO
<Ape> Wget even converted the links and image src so that everything should be usable with just ipfs
<Kubuxu> Ape: same hash. YEY
<ipfsbot> [webui] Dignifiedquire closed pull request #193: Update i18next-browser-languagedetector to version 0.0.13
<patagonicus> It's still slow when I try to mirror the Alpine repo, but that might just be the server. Not a lot of power and way too much running on it. :/
<Kubuxu> Ape: also we broke encoding.
O47m341 has joined #ipfs
<Kubuxu> Original site is encoded in windows-1252 and IPFS version is in utf-8.
<Kubuxu> This is something that needs to be accounted for.
<Ape> There is meta tag with charset=iso-8859-1 on the site html
<Ape> And ipfs gateway isn't setting the encoding
<Ape> I can't see why this doesn't work
<Ape> Unless the site text isn't really iso-8859-1
<ipfsbot> [go-ipfs] RichardLitt force-pushed feature/shutdown from 8e2c77c to 7e82f49: http://git.io/vui8j
<ipfsbot> go-ipfs/feature/shutdown 7e82f49 Richard Littauer: Edited following @chriscool feedback...
<r4sp> Hello...
fpibb has joined #ipfs
<Kubuxu> For some reason FF renders it with UTF-8. Heh
<Kubuxu> s/Heh/Huh
<ipfsbot> [webui] greenkeeperio-bot opened pull request #194: stream-http@2.0.3 breaks build ⚠️ (master...greenkeeper-stream-http-2.0.3) http://git.io/vzGvb
<Ape> We could convert the files to utf-8 after scraping
<r4sp> I'm looking for an opensource project to work in my free time... I saw a few videos and this one seems to be really interesting... Is there another reason to convince me and choose this one? Dont get me wrong...
<Kubuxu> Ape: gateway sends: Content-Type:"text/html; charset=utf-8"
<Kubuxu> Do ctrl+F5
<Ape> Oh.. Well usually utf-8 is the right choice
<Ape> Maybe this would work if it didn't send it
<otherbrian> is anyone familiar with the NAT discovery?
<otherbrian> (within the project)
<Kubuxu> Yeah, I wonder what current browsers will default to.
<Ape> But do browsers default to utf-8? Because we really want to default to it. But if HTML has an encoding we should use that.
<Ape> And it would be quite hacky if the gateway tried to detect the encoding
<Ape> r4sp: This is a very interesting project. Just download 0.4.0 and play with it for a while
<r4sp> Ape: I will try :)
hellertime has quit [Quit: Leaving.]
<Kubuxu> r4sp: It is awesome. Really, might have chance to do something that will change the world (I'm not kidding).
<Kubuxu> s/, /, you /
<Ape> There are lots of p2p cas fs projects nowadays, but this one works already quite nicely
<Ape> And I like the way it's being developed
<Kubuxu> Ape: looks like most browsers will detect UTF-8 if it is used, and an encoding in the HTTP header will almost always override it.
<ipfsbot> [webui] Dignifiedquire closed pull request #194: stream-http@2.0.3 breaks build ⚠️ (master...greenkeeper-stream-http-2.0.3) http://git.io/vzGvb
<Ape> So the gateway perhaps shouldn't send the encoding and utf-8 pages would still work?
<Kubuxu> Though, HTML5 recommends that the HTTP header should have the lowest priority, just before heuristics.
<whyrusleeping> r4sp: we welcome new contributors :) If you need some convincing, we're doing things that will make your package managers faster
<Kubuxu> Ape: In theory yes, in practice IDK.
<otherbrian> hey @whyrusleeping im back.. another NAT question for you
jfis has joined #ipfs
<whyrusleeping> otherbrian: hit me
reit has quit [Ping timeout: 264 seconds]
<otherbrian> you seem to be one of the last ones on the issue that was most recently closed implementing it
<Ape> Kubuxu: Seems like many browsers still default to windows-1252
<Ape> So the gateway should have utf-8 to override that
<otherbrian> i looked at the nat discovery logic in net
<Ape> But we need to convert mirrored pages to utf-8, too
<otherbrian> it seems like there’s still a gap
<Kubuxu> Ape: problem is that it overrides what is specified in <meta>
<otherbrian> IGD_v1, IGD_v2, and NAT-PMP
<r4sp> whyrusleeping: Just got my new kimsufi server so going to spent a while with it and after that will explore this :)
<Ape> But many documents do not have meta tags, so utf-8 must be specified somewhere
<otherbrian> but you still may be behind a NAT if those guys all fail — so are you hosed, for now?
<Kubuxu> UTF-8 should be detected.
<Kubuxu> In theory. :PP
<whyrusleeping> otherbrian: yeah, pretty much :/
<otherbrian> so the missing piece here is the STUN component
<whyrusleeping> if youre behind a shitty nat, we pretty much rely on outbound connections
<whyrusleeping> i want to have code that lets nodes request a connection from a given node through the dht
<otherbrian> so you mean that you must connect with a peer, and no peer can connect with you
<otherbrian> it’s up to you to join the network
<otherbrian> resources you cache for example won’t be accessible unless you’ve connected outbound
<Kubuxu> UTF-8 pages can be fixed by including it in <meta>; non-UTF-8 pages would have to be transcoded if we keep UTF-8 in the HTTP header.
<otherbrian> maybe that’s a bit too simplistic… still trying to understand the protocol
<Kubuxu> It is broken on FF44.0bb7
<Kubuxu> I will check on chrome.
<Ape> Kubuxu: But not all documents are even html. What about txt files?
<Kubuxu> also broken on chrome
<Kubuxu> Detection in theory will work, we would have to check how well. :P
M-eternaleye has joined #ipfs
M-eternaleye has quit [Changing host]
M-eternaleye is now known as eternaleye
<Kubuxu> Ape: UTF-8 with BOM is recognized correctly.
<Kubuxu> There is also lib for charset detection in go. Hmm.
<Kubuxu> Rest is not.
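[editor's note] The precedence the thread converges on can be sketched as a toy model. This is an illustration only, not the full HTML5 encoding-sniffing algorithm: a UTF-8 BOM is recognized first, then the gateway's HTTP `Content-Type` charset, then a `<meta>` tag, then the windows-1252 fallback most browsers use.

```python
import re

def pick_charset(body: bytes, http_charset=None) -> str:
    """Toy precedence model of the thread's conclusion (not the real
    HTML5 sniffing algorithm): BOM, then HTTP header, then <meta>,
    then the browsers' windows-1252 fallback."""
    if body.startswith(b"\xef\xbb\xbf"):
        return "utf-8"                      # UTF-8 BOM is recognized
    if http_charset:
        return http_charset                 # gateway's Content-Type wins next
    m = re.search(rb'charset=["\']?([\w-]+)', body[:1024], re.I)
    if m:
        return m.group(1).decode("ascii").lower()
    return "windows-1252"                   # common browser default
```

This also shows why the gateway's `charset=utf-8` header broke the windows-1252 site: the header takes effect before the page's own `<meta>` tag is ever consulted.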
<vijayee> daviddias: crazy question: Does the js-repo-work in browser?
<Ape> Why is fuse read so very slow compared to 'ipfs get'
<Kubuxu> Ape: buffering and some other things
<otherbrian> whyrusleeping: routing happens via DHT, correct?
<Kubuxu> I think it traverses DAG from root, reads 4K, repeat.
<Ape> I think you open a file and then do all reads to it in fuse (or any vfs)
<Ape> And it only evaluates the path when opening
<Ape> I just ran: find . -iname "*.html" -exec iconv --verbose -f iso-8859-1 -t utf-8 -o {} {} \;
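[editor's note] A caution on the find+iconv one-liner above: with some iconv implementations, `iconv -o FILE FILE` truncates the input before reading it. A safer sketch (same idea, hypothetical `site/` directory for illustration) converts via a temp file:

```shell
# Safer variant of the in-place conversion: write to a temp file first,
# since `iconv -o FILE FILE` can clobber the input on some systems.
set -e
mkdir -p site
printf 'caf\351' > site/index.html    # sample iso-8859-1 content ("café")
find site -iname '*.html' | while read -r f; do
  iconv -f iso-8859-1 -t utf-8 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```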
<whyrusleeping> otherbrian: yes
luigiplr has joined #ipfs
* luigiplr whistles
<richardlitt> IPFS Weekly #2 PR: https://github.com/ipfs/weekly/pull/12
chriscool has joined #ipfs
<Kubuxu> Ape: I don't think so, it would make opening big files impossible.
<luigiplr> Oh nice the webui is react
<achin> richardlitt: if you are amenable to giving me contributor access to the weekly repo, i could push changes directly to the the feat/jan-12 branch
<otherbrian> so the error you might run into then, being behind a shitty nat such that you aren’t even aware of it, is that someone else might try to connect to you via an entry in the DHT and fail
<Ape> Kubuxu: It doesn't read the data when opening. Only evaluates the path and gives an fd
<otherbrian> which could result in your DAGs not being published… ?
<richardlitt> achin: Cool, will check. Not sure I have the power to do that.
<richardlitt> achin: will ask who does.
<luigiplr> Awe you guys are not full es6
<richardlitt> achin: fixed issues. Thanks. Also, added awesome_bot to check links, think it's a smart move
<achin> you must make a sacrifice to the merkedag gods
<Kubuxu> Ape: but it has to evaluate the DAG of the file for each (usually 4k) chunk: https://github.com/ipfs/go-ipfs/blob/master/fuse/readonly/readonly_unix.go#L179
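[editor's note] A toy model (not go-ipfs code) of why re-resolving per read is slower than streaming: if every 4K read walks the DAG from the root again, reading N chunks costs extra node visits per chunk, whereas an `ipfs get`-style reader resolves the root once and streams the leaves.

```python
CHUNK = 4096  # typical FUSE read size discussed above

def make_dag(n_chunks):
    """Build a toy file DAG: a root node listing its leaf chunks."""
    return ("root", [("leaf", i) for i in range(n_chunks)])

def read_naive(dag, n_chunks):
    """FUSE-style: re-resolve from the root for every 4K read."""
    visits = 0
    for i in range(n_chunks):
        kind, leaves = dag   # visit the root again
        visits += 1
        _ = leaves[i]        # visit the leaf holding this chunk
        visits += 1
    return visits

def read_streaming(dag, n_chunks):
    """`ipfs get`-style: resolve the root once, then stream the leaves."""
    kind, leaves = dag
    visits = 1
    for i in range(n_chunks):
        _ = leaves[i]
        visits += 1
    return visits
```

With a deeper DAG the gap widens further, since each naive read pays the full root-to-leaf depth; Ape's prefetching idea would amortize exactly that cost.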
<luigiplr> dignifiedquire: i read here your planning on going full es6
<luigiplr> would you be up for some es7 aswell?
<richardlitt> achin: done
<dignifiedquire> luigiplr: stage-0 is already enabled
<dignifiedquire> so yes
<achin> richardlitt: thanks!
<luigiplr> Oh thats tasty.
<luigiplr> Alright ill es6/7 it all
<dignifiedquire> I also really want to use http://redux.js.org/ for data flow, but that's just my secret wish at the moment
<ipfsbot> [go-ipfs] whyrusleeping pushed 1 new commit to master: http://git.io/vzGnx
<ipfsbot> go-ipfs/master 3c6a40a Jeromy: remove failed merge tag from pin ls help...
* whyrusleeping pew pew pew
<Ape> Kubuxu: Isn't there some way to optimize this by prefetching the chunk references or something
* dignifiedquire falls down due to being shot three times
<luigiplr> dignifiedquire: Hmmm
<luigiplr> more of a http://alt.js.org/ person myself
<Kubuxu> I have no idea.
<luigiplr> :))
<dignifiedquire> luigiplr: I have not seen alt before I'm afraid
kahiru has joined #ipfs
<luigiplr> and then to wire it into the component..
<luigiplr> Oh shit
<luigiplr> i have not es6ed that component yet
<luigiplr> well thats embarrassing.
<dignifiedquire> :D
<dignifiedquire> I think I can just barely read it ;)
* luigiplr finds different repo im using it on
<richardlitt> daviddias: Can you add the libp2p and js-ipfs notes to the sprint issue, please?
<luigiplr> there we go
<luigiplr> I always found the alt docs to be better than redux's
<luigiplr> may have changed since i started though :D
<luigiplr> (hopefully)
<dignifiedquire> yeah redux docs got a good upgrade end of last year
<luigiplr> ah nice
<dignifiedquire> how is alt handling immutability?
<daviddias> sure, I can c&p that
<luigiplr> im sure i saw a blog post on that somewhere...
<luigiplr> long story short: you can
<luigiplr> fairly easy
<richardlitt> thanks daviddias. Were there any action items?
<dignifiedquire> :)
<dignifiedquire> luigiplr: tbh I have not really used redux before, I've been using reflux but was not really happy with it
<dignifiedquire> so my bias is not super strong
<luigiplr> i could put some alt in while i do the es6/7'ing?
<luigiplr> or perhaps better to make that a alternative PR
<luigiplr> :P
<luigiplr> oh god
<luigiplr> "jquery"
<luigiplr> RIP
<dignifiedquire> there is a PR to rip it out luckily
<luigiplr> :D
<luigiplr> what was it used for?
<dignifiedquire> mostly getting properties of dom elements
<achin> dignifiedquire: any objection to us sharing a link to http://v04x.ipfs.io/ipfs/QmZyvWokPYGg6DrjE6o2V7qhThzZQZ8QCWqdd2U3S75HXC/index.html in the weekly roundup?
<luigiplr> oh.
<luigiplr> meh i will end up redoing his work anyway
<dignifiedquire> but some bad stuff like changing the height of a container as well
<luigiplr> 0/10 would run screaming
otherbrian has quit [Quit: otherbrian]
<dignifiedquire> achin: no not all
<achin> thanks!
<daviddias> richardlitt: that would be my todo list, roadmap is stable (-> all the things ahah ) :)
<dignifiedquire> luigiplr: if you want to go for alt, I really have no objections, but let's make it two PRs so it's easier to finish/review
<luigiplr> Terrific.
<dignifiedquire> anything that gets these api calls out of the components makes me happy tbh
<Ape> Is it possible to inline a file to a directory if the file is smaller than the multihash?
<luigiplr> haha
<luigiplr> dignifiedquire: any plans on adding a page to preview/view the contents of hashs?
<whyrusleeping> Ape: that would be pretty neat
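[editor's note] A quick sketch of the size math behind Ape's idea, assuming sha2-256 (the default): a multihash is a 2-byte prefix (function code 0x12, digest length 0x20) plus the 32-byte digest, so 34 bytes total, and only files smaller than that could be inlined in place of the link. Illustrative only; `could_inline` is a hypothetical helper, not an IPFS API.

```python
import hashlib

def multihash_sha256(data: bytes) -> bytes:
    """Encode a sha2-256 multihash: <fn code><digest len><digest>."""
    digest = hashlib.sha256(data).digest()
    return bytes([0x12, len(digest)]) + digest   # 2 + 32 = 34 bytes

def could_inline(file_bytes: bytes) -> bool:
    """Ape's criterion: inline the file if it is smaller than the
    multihash that would otherwise point at it."""
    return len(file_bytes) < len(multihash_sha256(file_bytes))
```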
<luigiplr> like a file browser styled thing
<dignifiedquire> well you can already view objects
<dignifiedquire> if you enter a hash at the top in the bar
<luigiplr> docs or it didn't happen.
<dignifiedquire> there are no docs :P
<luigiplr> hah
<luigiplr> haha
<luigiplr> damn no edit on irc is there
<dignifiedquire> for some details on what I have in mind for the future though take a look at https://github.com/ipfs/webui/issues/90
<dignifiedquire> one of my next priorities is to improve on that design draft and improve on the functionality of the individual pages
<luigiplr> Oh that connections screen...
<luigiplr> :thumbs_up:
<luigiplr> you are able to get the agent?
<dignifiedquire> luigiplr: not sure, if not I'll tell whyrusleeping that he needs to implement it :D
<luigiplr> lol
<luigiplr> nice
Encrypt has joined #ipfs
<luigiplr> would you mind altering the folder layout of the project a tad?
<luigiplr> if that was part of the PR that is
voxelot has quit [Ping timeout: 240 seconds]
<dignifiedquire> you can change anyting below app/scripts
<luigiplr> okie dokie
<ipfsbot> [webui] luigiplr opened pull request #195: [WIP] es6/7 refactor (master...feature/es6-7) http://git.io/vzGE8
<dignifiedquire> luigiplr: oh wow, that was quick :)
<luigiplr> :P
<dignifiedquire> luigiplr: but please no semicolons :P
* luigiplr fine
<dignifiedquire> sorry, I'm allergic to them, and we have that as a code style over the whole organization
<luigiplr> haha alright
<dignifiedquire> and also 2 spaces
* luigiplr sets sublime
<dignifiedquire> luigiplr: that's what we follow https://github.com/feross/standard
<dignifiedquire> eslint will scream at you if in doubt ;)
mildred has joined #ipfs
<luigiplr> mmm alrighty
<dignifiedquire> there are some nice sublime packages for it: https://github.com/feross/standard#sublime-text
<luigiplr> Smexy
<whyrusleeping> dignifiedquire: dont use anything feross writes
<whyrusleeping> he secretly likes semicolons
<dignifiedquire> whyrusleeping: easy for you to say, you write go
<dignifiedquire> :D:D
<vijayee> anyone know if protobuf bytes are unmarshalled to a buffer in node?
jholden_ has quit [Read error: Connection reset by peer]
<richardlitt> daviddias: cool, sounds good.
jholden_ has joined #ipfs
zorglub27 has quit [Read error: Connection reset by peer]
zorglub27 has joined #ipfs
otherbrian has joined #ipfs
<dignifiedquire> luigiplr:
<dignifiedquire> http://babeljs.io/repl/#?experimental=true&evaluate=true&loose=false&spec=false&playground=true&code=class%20Table0%20extends%20React.Component%20%7B%0A%0A%20%20%20%20static%20propTypes%3A%20%7B%0A%20%20%20%20%20%20%20%20table%3A%20React.PropTypes.array%2C%0A%20%20%20%20%20%20%20%20children%3A%20React.PropTypes.array%0A%20%20%20%20%7D%3B%0A%7D%0A%0Aclass%20T
<dignifiedquire> able1%20extends%20React.Component%20%7B%0A%0A%20%20%20%20static%20propTypes%20%3D%20%7B%0A%20%20%20%20%20%20%20%20table%3A%20React.PropTypes.array%2C%0A%20%20%20%20%20%20%20%20children%3A%20React.PropTypes.array%0A%20%20%20%20%7D%3B%0A%7D
<dignifiedquire> damnit
<dignifiedquire> luigiplr: http://bit.ly/22YhE7S take a look, you need to use = for propTypes otherwise they won't be included
<luigiplr> Thats...
<luigiplr> odd
<luigiplr> i normally go with =
<luigiplr> oh
<luigiplr> wait
<luigiplr> i read you so wrong
<jgraef> Hey, since python-ipfs-api is outdated and has a lot of bloat to support python2, I reimplemented it here: https://github.com/jgraef/python3-ipfs-api . The lowlevel API is almost finished, we have docs and examples, a high-level merkledag module and more high-level APIs coming soon. I'd appreciate some feedback :)
<Kubuxu> You modify JS to make it more usable; I dislike JS so much that if I have anything bigger to do in JS I go with ScalaJS.
Matoro has joined #ipfs
m0ns00n has joined #ipfs
<dignifiedquire> jgraef: looking pretty nice :)
<ipfsbot> [go-ipfs] RichardLitt force-pushed feature/shutdown from 7e82f49 to 48353a7: http://git.io/vui8j
<ipfsbot> go-ipfs/feature/shutdown 48353a7 Richard Littauer: Edited following @chriscool feedback...
mec-is has joined #ipfs
voxelot has joined #ipfs
<whyrusleeping> you guys better be happy with 0.4.0
<whyrusleeping> i worked on that for like 10 months
<whyrusleeping> thats almost a year
<dignifiedquire> whyrusleeping: we will be :P
<dignifiedquire> but just for a short while
<dignifiedquire> we expect greater things to come
<achin> like giant robot kitties
<luigiplr> lol
danielrf has quit [Quit: WeeChat 1.3]
<dignifiedquire> luigiplr: left some comments ;)
<luigiplr> pfft
<luigiplr> ill add some docs aswell for functions etc
<dignifiedquire> uuhh docs :)
<luigiplr> lol
<dignifiedquire> luigiplr: also one more nitpick, please try to keep lines under 80 characters
* luigiplr slaps dignifiedquire around with a sperm whale
<luigiplr> but sure :P
* dignifiedquire tries to not look to evil
* dignifiedquire too
Matoro has quit [Read error: Connection reset by peer]
Matoro has joined #ipfs
<dignifiedquire> luigiplr: I really appreciate it your work! so please don't take it the wrong way
<luigiplr> haha np
<achin> please think of the whales
<luigiplr> ill do a final project wide indent change back to 2spaces at the end of the task
<luigiplr> well app/scripts wide at least
<luigiplr> :P
<dignifiedquire> :D
chriscool has quit [Quit: Leaving.]
<dignifiedquire> one more thing, no need to change the functional componets to class
<dignifiedquire> or will react treat them the same if they don't inherit from Component?
<luigiplr> it should treat them the same however you would do for icon
<luigiplr> new icon(stuffs)
<luigiplr> rather than
<luigiplr> icon(stuffs)
ispeedtoo has quit [Quit: Page closed]
<dignifiedquire> but I do <Icon />
<dignifiedquire> those are stateless react components
<luigiplr> O
<luigiplr> oopes
<luigiplr> i..
<luigiplr> mmmmmmmm
<luigiplr> best would be as an actual react class but..
<luigiplr> if you dont want it as one :P
<luigiplr> i can leave it
<dignifiedquire> they have less overhead
<luigiplr> mmm
<luigiplr> ah yes
<luigiplr> okay
<dignifiedquire> that's why I prefer them if the component does not need state
<luigiplr> i would have made it just use props?
<luigiplr> oh
<luigiplr> yah okay
<luigiplr> nvm
<luigiplr> ill go back to that and change it
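[editor's note] A sketch of the distinction being discussed, modeled without React itself (React is not assumed available here): a stateless functional component is just a function of props, and the renderer calls it rather than instantiating it with `new`, which is the lower overhead dignifiedquire mentions. `render`, `Icon`, and `IconClass` are hypothetical names for illustration. Written standard-style (no semicolons, 2 spaces), per the channel's code style.

```javascript
// Stateless: conceptually what <Icon /> boils down to when Icon is a function.
const Icon = (props) => ({ type: 'i', className: `icon-${props.glyph}` })

// Class-style: extra machinery (an instance, lifecycle slots) for no
// gain when the component has no state.
class IconClass {
  constructor (props) { this.props = props }
  render () { return { type: 'i', className: `icon-${this.props.glyph}` } }
}

// Toy renderer showing the two call shapes a real renderer distinguishes.
function render (Component, props) {
  return Component.prototype && Component.prototype.render
    ? new Component(props).render()   // class component: instantiate
    : Component(props)                // functional component: just call it
}
```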
mec-is has left #ipfs [#ipfs]
cemerick has quit [Ping timeout: 250 seconds]
jholden_ has quit [Ping timeout: 246 seconds]
Matoro has quit [Read error: Connection reset by peer]
Matoro has joined #ipfs
elima has joined #ipfs
arpu has joined #ipfs
vijayee has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
pfraze is now known as prf
voxelot has quit [Remote host closed the connection]
voxelot has joined #ipfs
compleatang has joined #ipfs
voxelot has joined #ipfs
voxelot has quit [Changing host]
jholden_ has joined #ipfs
mildred has quit [Ping timeout: 256 seconds]
<feross> whyrusleeping: oh noes! you figured out my secret
joshbuddy has quit [Quit: joshbuddy]
<dignifiedquire> feross: when standard achieved world domination you sneak in an automatic insert semicolons script on every run right?
danielrf has joined #ipfs
ubuntu-mate has joined #ipfs
<ubuntu-mate> hello?
<luigiplr> dignifiedquire: :OOOOOOOOOOOOO
<dignifiedquire> luigiplr: Aaaaaaasa
<luigiplr> I have reached a conclusion.
<luigiplr> there needs to be a es5 -> es6 tool.
<luigiplr> T_T
tinybike has joined #ipfs
<ubuntu-mate> i was wondering if ipfs could support FXP
<dignifiedquire> luigiplr: there is esnext
<tperson> luigiplr: What would it do?
<ubuntu-mate> currently i can only think of running it as fuse on my machine and then a ftp server to the same folder
<ubuntu-mate> but it seem problematic
<ubuntu-mate> since the idea of FXP is cloud2cloud
<luigiplr> tperson: take stuff like
<ubuntu-mate> if i DL the content from FTP to my machine
<dignifiedquire> https://esnext.github.io/esnext/#%2F*%0AOn%20the%20left%20is%20some%20legacy%20JavaScript%2C%20and%20on%20the%20right%0Ais%20the%20same%20code%20modernized%20by%20esnext.%0A%0AEdits%20made%20to%20the%20code%20on%20the%20left%20will%20re-generate%0Athe%20code%20on%20the%20right.%20Try%20it%20out!%0A*%2F%0A%0Avar%20memoize%20%3D%20require('lodash').memoize%3B%
<dignifiedquire> 0A%0Avar%20upperCase%20%3D%20memoize(function(string)%20%7B%0A%20%20return%20string.toUpperCase()%3B%0A%7D)%3B%0A
<dignifiedquire> ahh
<dignifiedquire> sorry
<ubuntu-mate> i might as well use sftp and then just hash and host directly with ipfs
<luigiplr> module.exports
<ubuntu-mate> dignifiedquire: ghostbin m8
<luigiplr> and convert to export default
<luigiplr> or
<luigiplr> dignifiedquire: well then.
<tperson> > esnext.convert is not a function
<tperson> :(
<dignifiedquire> Oo
<tperson> First thing that happens for me when I look at the dev console is a Maximum call stack size exceeded lol
<dignifiedquire> loool
<luigiplr> lmfao
<luigiplr> dignifiedquire: you good with docs styled like this: https://github.com/luigiplr/webui/blob/feature/es6-7/app/scripts/views/filelist.js#L19-L24
<luigiplr> ?
joshbuddy has joined #ipfs
<dignifiedquire> I'm afraid people don't want Java in their JavaScript ;)
<dignifiedquire> we had a lengthy discussion about this on ipfs/js-ipfs-api
<luigiplr> PFFFFFFFFFFFFFFFFFT
<ansuz> /java/.test('javascript') // true
<dignifiedquire> and most said they don't want jadoc style comments
* luigiplr mumbles
* luigiplr grubls
* luigiplr stabs dignifiedquire 32 times while no one is looking
* dignifiedquire gets up a 33rd time
zorglub27 has quit [Quit: zorglub27]
<luigiplr> lol
<Kubuxu> dignifiedquire: That is why I code JS in ScalaJS. I mostly stopped coding in Java and I am using Scala instead, and I can do the same in the case of JS.
ashark has quit [Ping timeout: 260 seconds]
<Kubuxu> JS in bigger projects gets so messy.
<luigiplr> s/gets/can get
<dignifiedquire> it's possible to handle, but you have to have a good amount of discipline in the team
<ubuntu-mate> looks like it has some risks
* ubuntu-mate yawns
<dignifiedquire> luigiplr: by the way when you start thinking about adding alt, the individual pages need to be decoupled, as they will be broken into their own packages at some point
<ubuntu-mate> i just want a way to pass a secure http or ftp file to an ipfs node directly
elima has quit [Ping timeout: 246 seconds]
<ubuntu-mate> ideally i could run my node and then push to the bootstap nodes
jfis has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<Kubuxu> I just feel at home: http://www.scala-js-fiddle.com/ :P
NightRa has quit [Quit: Connection closed for inactivity]
<Kubuxu> ipfs add <(curl https://....)
<ubuntu-mate> i doubt there is a way to push bits from sftp/ftp directly to ipfs
<ubuntu-mate> it seems kinda of backwards
<ubuntu-mate> since the files need to be turned into blocks
<ubuntu-mate> and hashed
<Kubuxu> You can add them file after file directly and then form directories using new Files/MFS API
<ubuntu-mate> oh a new api?
<ubuntu-mate> what is MFS?
<Kubuxu> Yup, in 0.4 (merged into master today after the release of 0.3.11). It allows you to create and modify directory/file structures, operating only on hashes and names.
<Kubuxu> MFS - mutable file system
devbug has joined #ipfs
<ubuntu-mate> neat ;)
<ubuntu-mate> um btw does benet have a offical YT channel?
<ubuntu-mate> oops i was in the go git
<whyrusleeping> note: until 0.4.0 is officially released, use the files api with a bit of care. there are some things i havent quite fleshed out with regards to the `--flush` flag
<Kubuxu> So you add files then: ipfs files mkdir /mysite; ipfs files cp /ipfs/Qm...AAA /mysite/20160113/; ipfs files cp /mysite/20160113/ /mysite/current
<ubuntu-mate> voxelot: thanks
<Kubuxu> only first operation takes time
<voxelot> shut up whyrusleeping, files api is great and everyone should trust it with their kids lives
<ubuntu-mate> oh that one
<Kubuxu> not first operation but addition of files.
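[editor's note] Kubuxu's point can be sketched with a toy model (not the real Files API): `ipfs add` is the expensive step because it hashes content, while `files cp` only copies a hash reference into the directory tree, so giving the same content a second name (e.g. `/mysite/current`) is effectively instant. The class and the fake CID format here are illustrative assumptions.

```python
import hashlib

class ToyMFS:
    """Toy model of the MFS idea: blocks hold content, the tree holds
    only name -> hash references."""
    def __init__(self):
        self.blocks = {}   # hash -> content (the "repo"); filled by add()
        self.tree = {}     # mfs path -> hash; filled by cp()

    def add(self, content: bytes) -> str:
        """The expensive step: hash the content and store the block."""
        h = "Qm" + hashlib.sha256(content).hexdigest()[:16]  # fake CID
        self.blocks[h] = content
        return h

    def cp(self, src_hash: str, dst_path: str):
        """The cheap step: copy a reference; no data is touched."""
        self.tree[dst_path] = src_hash

mfs = ToyMFS()
h = mfs.add(b"<html>my site</html>")
mfs.cp(h, "/mysite/20160113/index.html")
mfs.cp(h, "/mysite/current/index.html")   # same hash, second name: instant
```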
<ubuntu-mate> >last video 7 months ago
<ubuntu-mate> i guess he's too busy to make vlogs about code
<whyrusleeping> voxelot: mmmm, well when you put it that way...
<voxelot> but seriously i have been hammering the api with javascript apps and it never breaks
<whyrusleeping> well thats good to hear!
<whyrusleeping> lol
<ipfsbot> [webui] greenkeeperio-bot opened pull request #196: jquery@2.2.0 breaks build ⚠️ (master...greenkeeper-jquery-2.2.0) https://github.com/ipfs/webui/pull/196
<voxelot> might just be what im using it for... but yeah cp, rm, stat, all good to me
<voxelot> one thing i have noticed lately is that after a publish the dht resolves back old records for a few seconds, and only sometimes
computerfreak has joined #ipfs
Matoro has quit [Read error: Connection reset by peer]
Matoro has joined #ipfs
joshbuddy has quit [Quit: joshbuddy]
jholden_ has quit [Ping timeout: 264 seconds]
kahiru has quit [Ping timeout: 246 seconds]
tinybike has quit [Quit: Leaving]
<feross> dignifiedquire: that would be a huge troll, and good fun!
<dignifiedquire> with eslint autofix it would be so easy to do :D :D
Not_ has joined #ipfs
joshbuddy has joined #ipfs
jaboja64 has quit [Remote host closed the connection]
devbug has quit [Remote host closed the connection]
danielrf has quit [Quit: WeeChat 1.3]