pfraze_ has quit [Read error: Connection reset by peer]
disgusting_wall has quit [Quit: Connection closed for inactivity]
pfraze_ has joined #ipfs
jrabbit has quit [Changing host]
jrabbit has joined #ipfs
Matoro has quit [Ping timeout: 256 seconds]
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
pfraze__ has joined #ipfs
pfraze_ has quit [Read error: Connection reset by peer]
Matoro has joined #ipfs
simonv3 has quit [Quit: Connection closed for inactivity]
jgraef2 has joined #ipfs
<Shibe>
if I have an ipfs folder and a daemon is running there, do I have to ipfs add all the files?
<Shibe>
or will it just serve them whenever somebody needs it
<achin>
when you add a folder to ipfs recursively (with the -r flag) every file in that folder is also added to ipfs
jgraef has quit [Ping timeout: 272 seconds]
<lgierth>
Shibe: it doesn't care about the directory it's running in
<lgierth>
only what you add, or fetch from others, will be stored
<Shibe>
so I have to add every file there?
<lgierth>
add -r
<Shibe>
okay
<Shibe>
lgierth: if I ipfs get a file, is it automatically added?
<achin>
it is automatically cached by your node, but might be deleted later if you run "ipfs repo gc"
pfraze__ has quit [Read error: Connection reset by peer]
<Shibe>
ok
pfraze_ has joined #ipfs
libman has quit [Ping timeout: 260 seconds]
pfraze_ has quit [Read error: Connection reset by peer]
pfraze_ has joined #ipfs
libman has joined #ipfs
HastaJun has quit [Ping timeout: 265 seconds]
libman1 has joined #ipfs
libman has quit [Ping timeout: 240 seconds]
voxelot has quit [Ping timeout: 264 seconds]
mec-is has quit [Ping timeout: 252 seconds]
pfraze has joined #ipfs
pfraze_ has quit [Read error: Connection reset by peer]
<lgierth>
ok friends, i need the combined wisdom of #ipfs -- on a server, i want to send the same incoming http requests to two different backends simultaneously, and hand the first "successful" response to the client. any suggestions? nginx and haproxy both seem not to support this kind of multiplexing. does anyone know of other http servers that do support this, or tcp-level tools?
<brimstone>
lgierth: a short timeout isn't sufficient?
<lgierth>
we don't really have a notion of a timeout in this case
<brimstone>
i guess because the request might take a while
<lgierth>
yeah
<lgierth>
the two backends are ipfs-0.3.x and ipfs-0.4.0 :)
<lgierth>
we want to have ipfs.io backed by both for a while
<noffle>
lgierth: just plain http requests? maybe just whip up a quick tiny http proxy server in go or node (can help ya with node side!)
pfraze has quit [Read error: Connection reset by peer]
<brimstone>
that's my reaction, since i can't find anything by googling either
<lgierth>
noffle: yeah i think i'm gonna go with that if nobody brings up something good until tomorrow :)
<achin>
though i've tried to create other hashes, but ipfs generally failed to work with them
<lgierth>
yeah? like create something with a non-Qm hash on one host, fetch it on the other?
<lgierth>
i would have expected that to work
<achin>
yeah
<lgierth>
wanna file an issue? we should get that fixed
<jbenet>
achin can you make a test case for that?
<achin>
sure thing. low priority, i'll get to it sometime this week (i need to retest with the latest master and dev040)
<jbenet>
davidar: hello :) -- q: how many "files" do you think there are in all the archives stuff-- super roughly, order of magnitude estimate
<achin>
i believe all i did was calculate the new hash and then copy a file from $IPFS_PATH/blocks into its new home ('new home' == new path based on the new hash)
<M-davidar>
jbenet (IRC): ooh, good question, I'd say in the millions, perhaps?
<lgierth>
openwrt alone is a few 100k (although that's not completely added yet)
<achin>
i alone have more than 1 million files on my nodes
<lgierth>
jbenet: use kg as the unit
<lgierth>
nobody will notice!!11
<achin>
bits are heavy, yo
<lgierth>
it's big data after all
<M-davidar>
achin (IRC): so 1-10M in total maybe?
<achin>
(doing a find | wc -l at the moment to get a more accurate count for the two biggest repos i have)
<achin>
sure, that seems about right
<achin>
i need faster disksssss
<achin>
at the moment, a lot of the archives are just single-file data dumps. like the GPG database is 7.7GB but only in a few hundred files
<achin>
but between the cdnjs archive and the wikispecies archive, i'm hosting 2646000 files
jgraef2 has quit [Ping timeout: 255 seconds]
<jbenet>
davidar achin thanks :) -- I estimated we have 100M-1B files on the network atm. I don't think I'm too far off.
<achin>
M-davidar: btw i was glad to see that you opened #53. some of the hashes that i've collected (and patched into my archives tree) seem to have disappeared from the network
<achin>
having the ipfs org able to pin some of this data (at least for the short-term) would be great
grahamperrin has joined #ipfs
<lgierth>
achin: we should reference all the archives under ipfs.io/refs eventually
<M-davidar>
achin: awesome, I was hoping you'd be on board with that :)
<achin>
yeah. do you all have a mechanism to keep ipfs.io/refs updated?
<lgierth>
github.com/ipfs/refs
<M-davidar>
lgierth (IRC): yeah, once we get them organised into a single tree it should be pretty trivial to get it into refs
<lgierth>
it's supposed to build the whole /refs tree from all kinds of sources
<lgierth>
or that yeah
<achin>
does that repo accept PRs? does it make sense to start to build up a /refs/archives playground?
<lgierth>
sure
<achin>
(there is at least 1 repo that i can easily keep updated daily, and pinned)
<lgierth>
deploying it to ipfs.io/refs is a bit of a hassle at the moment because it involves updating the TXT record for refs.ipfs.io
<lgierth>
but other than that it's cool
dignifiedquire has quit [Quit: Connection closed for inactivity]
<lgierth>
re: deployment, you can drop me an issue in ipfs/ops-requests for simple requests like that, e.g. "update refs.ipfs.io to Qmfoobar please"
<M-davidar>
achin (IRC): that would be awesome :D
<achin>
do i just plop something in versions/current? my quick skim of this repo is failing to fully grok it
<lgierth>
yeah most of the code is for building a dag from the refs-denylists-dmca repo
<lgierth>
a bit of it is also for handling the "previous" link
NightRa has quit [Quit: Connection closed for inactivity]
myrmicid has joined #ipfs
jhulten has quit [Ping timeout: 245 seconds]
dignifiedquire has joined #ipfs
joshbuddy has quit [Quit: joshbuddy]
joshbuddy has joined #ipfs
SebastianCB has joined #ipfs
Encrypt has joined #ipfs
computerfreak has quit [Remote host closed the connection]
Matoro_ has joined #ipfs
Matoro has quit [Ping timeout: 240 seconds]
<SebastianCB>
Not that ipfs is lacking potential, but here are two glossy articles that could serve as reference for databases in a scientific context (and centralization issues):
<ansuz>
I agree with most of the points, but actually find js to be a wonderful language
<ansuz>
development practices mostly do suck
<ansuz>
and dependency graphs are often absurd in the ecosystem
cryptix has joined #ipfs
<ansuz>
the problems are mostly pretty easy to avoid assuming you have control over which packages you use
joshbuddy has joined #ipfs
zorglub27 has joined #ipfs
M-victorm has quit [Ping timeout: 240 seconds]
M-victorm has joined #ipfs
voxelot has quit [Ping timeout: 240 seconds]
M-edrex has quit [Ping timeout: 240 seconds]
M-edrex has joined #ipfs
rektide has quit [Ping timeout: 240 seconds]
rschulman has quit [Ping timeout: 240 seconds]
chriscool has quit [Ping timeout: 250 seconds]
M-nated has quit [Ping timeout: 250 seconds]
M-rschulman1 has joined #ipfs
M-nated has joined #ipfs
M-eternaleye has quit [Ping timeout: 250 seconds]
<dignifiedquire>
ansuz: yeah, but you often only have limited control over the packages, and this notion of splitting projects into such small pieces that when you start installing them you actually need a list of modules to install before they do anything is just getting ridiculous
m0ns00n has quit [Quit: undefined]
kaliy has quit [Ping timeout: 250 seconds]
chriscool has joined #ipfs
<dignifiedquire>
I enjoy writing react and es6 and javascript a lot, but the tooling and complexity is just going through the roof, often without any real benefit
M-davidar has quit [Ping timeout: 240 seconds]
rektide has joined #ipfs
kaliy has joined #ipfs
moreati has joined #ipfs
M-eternaleye has joined #ipfs
<ansuz>
yea
<ansuz>
personally I have complete control over everything I use
<ansuz>
and usually I write from scratch
<ipfsbot>
[js-ipfs-api] Dignifiedquire created greenkeeper-stream-http-2.0.3 (+1 new commit): http://git.io/vzYmj
<ipfsbot>
js-ipfs-api/greenkeeper-stream-http-2.0.3 c5c7086 greenkeeperio-bot: chore(package): update stream-http to version 2.0.3...
<ansuz>
I was writing scheme before I got into js
M-davidar has joined #ipfs
<ansuz>
sometimes the tiny modules are perfect, sometimes they're stupid
<ansuz>
I see a lot of projects with lodash, underscore, and a few other similar libs in their tree
<whyrusleeping>
dignifiedquire: thats a great article
<ipfsbot>
[js-ipfs-api] Dignifiedquire deleted greenkeeper-stream-http-2.0.3 at c5c7086: http://git.io/vzY30
jfis has quit [Read error: Connection reset by peer]
<dignifiedquire>
whyrusleeping: I knew you'd enjoy that one
bedeho has joined #ipfs
<ipfsbot>
[js-ipfs-api] Dignifiedquire created greenkeeper-stream-http-2.0.4 (+1 new commit): http://git.io/vzY3o
<ipfsbot>
js-ipfs-api/greenkeeper-stream-http-2.0.4 0a2703e greenkeeperio-bot: chore(package): update stream-http to version 2.0.4...
<ansuz>
I often feel like I'm the only nodejs dev who's just writing on top of http
<ansuz>
express is everywhere
<ipfsbot>
[js-ipfs-api] Dignifiedquire created greenkeeper-stream-http-2.0.5 (+1 new commit): http://git.io/vzYsT
<ipfsbot>
js-ipfs-api/greenkeeper-stream-http-2.0.5 67d15a6 greenkeeperio-bot: chore(package): update stream-http to version 2.0.5...
<The_8472>
"perfection is achieved not when there is nothing more to add, but when there's nothing left to take away"
cryptix has quit [Ping timeout: 250 seconds]
jhulten has joined #ipfs
<ansuz>
inside every large program is a small program struggling to get out
whoisterencelee has quit [Ping timeout: 252 seconds]
maxlath has quit [Ping timeout: 240 seconds]
JasonWoof has quit [Read error: Connection reset by peer]
M-davidar has quit [Ping timeout: 276 seconds]
hiredman has quit [Ping timeout: 276 seconds]
hiredman has joined #ipfs
brixen has quit [Ping timeout: 276 seconds]
brixen has joined #ipfs
jgraef has quit [Ping timeout: 260 seconds]
M-davidar has joined #ipfs
ilyaigpetrov has joined #ipfs
<ipfsbot>
[js-ipfs-api] greenkeeperio-bot opened pull request #188: stream-http@2.0.4 breaks build
maxlath has joined #ipfs
<patagonicus>
For an IPNS address to remain resolvable the node that has the private key has to stay online, right? How long is the TTL for published names?
<Kubuxu>
patagonicus: 24h
<The_8472>
ipns pinning from other nodes is a planned feature afaik
<Kubuxu>
Problem is that it would have to change the way that IPNS works.
<Kubuxu>
(Introduce IPRS and records w/o expiration date).
<The_8472>
you can still have expiration dates
<Kubuxu>
As currently IMHO IPNS has nothing in common with permanent web.
<The_8472>
pinning nodes would simply help with republishing whatever the latest entry is
<The_8472>
the original node could still replace them with newer ones
<The_8472>
or whoever holds the keypair
<patagonicus>
Yeah, IPNS pinning would solve other problems I have. :)
<Kubuxu>
But my node wanting to resolve it will throw those entries out as they are expired.
<patagonicus>
And what's this IPLD I keep hearing about? The repo's readme isn't really helpful …
<The_8472>
well, expiration is orthogonal to 3rd party republishing
<Kubuxu>
expiration is included in the signed entry.
<The_8472>
yes, but within the expiration interval you can still republish
<Kubuxu>
patagonicus: it is a new way to create linked structures.
<The_8472>
think 6 months expiration... with 3rd party republish you don't have to stay online for those 6 months
<Kubuxu>
you wouldn't, but the current expiration is hard-coded at 24h.
<patagonicus>
Ok, thanks. :)
<The_8472>
yeah, that needs changing. just saying you still can have expiration
<Kubuxu>
patagonicus: it will allow for example to make simple and compact git-link graphs.
<Kubuxu>
Currently you would have to use files and so on and try to cheat it into the existing system; IPLD will allow you to do it much more easily
<Kubuxu>
6 months expiration is still not permanent web. I know that you can switch to for example the resolved hash but ....
<The_8472>
IPNS is mutable. it'll never be permanent
<The_8472>
think of it as head-pointer into an append-only data structure
<Kubuxu>
I don't see why it can't be permanent. As you said: it is the head-pointer of an append-only structure. This structure can be permanent, so the head pointer can be also.
<The_8472>
then you don't need the pointer in the first place
<The_8472>
you can just refer to the ipfs hash
<Kubuxu>
I need, to find new head.
<The_8472>
but if there can be a new one then it's mutable
<Kubuxu>
permanent in IPFS means that the life of an object can be extended beyond the original author's interest in it.
brianisnotarobot has joined #ipfs
<The_8472>
i thought we're talking about ipns?
<Kubuxu>
that also applies to IPNS
<Kubuxu>
(IPFS as a whole project)
<The_8472>
you could keep a log of old revisions of ipns records if you wanted, but someone looking for an ipns name wouldn't know that you were referring to an old instance. if you want to specifically refer to a particular instance you might as well refer to the ipfs hash it resolves to
<The_8472>
maybe my imagination is limited, but i see no particular use for keeping old ipns records when you can use the underlying ipfs path anyway
<drathir>
in theory, if ipfs has one bootstrap node added, and it's a cjdns node outside the lan, and there's a local lan cjdns ipfs node which has the file pinned, ipfs should detect that one and make a lan-local connection?
<Kubuxu>
I say that in some cases an old IPNS entry is better than none.
<The_8472>
well, republishing gets you that
<drathir>
or does that lan node need to be added to the bootstrap list directly?
<Kubuxu>
No you won't as it will expire.
<Kubuxu>
drathir: both are in cjdns?
<The_8472>
you could just ignore expiration
<Kubuxu>
Dangerous as in some cases expiration makes sense.
<The_8472>
those probably are not the cases where you want to look at old stuff?
<Kubuxu>
exactly
<drathir>
yeah, correct, both are connected to the same cjdns bootstrap node outside the lan...
<The_8472>
so in those cases you can ignore it
<Kubuxu>
drathir: they will find each other, might be faster if you try requesting file available on the other node.
Tristitia has quit [Remote host closed the connection]
<drathir>
also the lan nodes ofc are peered with each other over cjdns...
<Kubuxu>
The_8472: a case where expiration makes sense: software updates. you don't want someone to launch a delayed replay attack by separating someone from the network and showing them only old entries for given software.
<drathir>
k, i will run some tests to see how that's working...
<Kubuxu>
cjdns discovery is also in the works (using cjdns' dht to try peering with random nodes).
<The_8472>
Kubuxu, unless you want to pull an old version
<Kubuxu>
then you can explicitly search records history.
<drathir>
Kubuxu: ofc accessing at local gateway...
<Kubuxu>
but you don't want Eve filtering entries that Alice gets so she gets old (buggy) software.
<ipfsbot>
[js-ipfs-api] Dignifiedquire pushed 1 new commit to master: http://git.io/vzYya
<The_8472>
sure, but that's up to the client to verify. that doesn't keep anyone from republishing the records, even if they're expired
<Kubuxu>
For example: in the case of a blockchain, expiration might be 2x block time. In the case of an average site: infinite; even if the original author stops caring, if someone cares to republish, it would live (and URIs are not broken)
<daviddias>
Anticafe has nicer wifi, but this is not bad either
<Kubuxu>
daviddias: you were in Kraków?
<daviddias>
never been there, why?
<Kubuxu>
ahh, someone else added place to your list there.
<ansuz>
daviddias: xwiki is pretty great for hacking
<ansuz>
come for the ansuz, stay for the babyfoot
<daviddias>
babyfoot?
<daviddias>
Kubuxu: list is shared, everyone can PR :)
moreati has joined #ipfs
<moreati>
AFAICT There's no Debian or Ubuntu packages for IPFS. Is that because no one's done it yet? The project doesn't want them (yet)? Other? (sorry if this is a dupe, wifi weirdness at my end)
<whyrusleeping>
moreati: we wouldnt mind a package in debian/ubuntu
<moreati>
I'll give it a go then :)
<whyrusleeping>
moreati: wonderful, thank you :)
ELFrederich_ has joined #ipfs
pfraze has joined #ipfs
ELFrederich_ is now known as ELFrederich
chriscool has quit [Quit: Leaving.]
zoobab has joined #ipfs
<zoobab>
hi
<zoobab>
working on an openwrt port
<zoobab>
so that we can have cheap IPFS devices with a simple NET2USB
<zoobab>
and you plug a hard drive at its back
jholden_ has joined #ipfs
jholden has quit [Ping timeout: 256 seconds]
the193rd has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to master: http://git.io/vzOgu
<ipfsbot>
[go-ipfs] whyrusleeping tagged v0.3.11 at master: http://git.io/vzOgV
<ipfsbot>
[go-ipfs] whyrusleeping merged master into release: http://git.io/vzOgH
<whyrusleeping>
choo choo!
<lgierth>
zoobab: i'm about to head out, but let me know if you need help or have questions about openwrt
<lgierth>
and be aware that ipfs requires go1.5, so gcc-go won't work
<lgierth>
the little mips routers are also not very good at crypto ;)
<lgierth>
(there are more powerful arm-based openwrt devices now of course)
<achin>
everybody on the release train!
elima has quit [Ping timeout: 255 seconds]
<lgierth>
You pay 5 euros per hour and everything on the menu is free.
<lgierth>
nice
<lgierth>
ok gotta run
<whyrusleeping>
dignifiedquire: heyo, you around?
<dignifiedquire>
whyrusleeping: I'm afraid so
jhulten has quit [Ping timeout: 260 seconds]
maxlath has quit [Ping timeout: 272 seconds]
brian has joined #ipfs
brian is now known as Guest34111
ilyaigpetrov has quit [Quit: Connection closed for inactivity]
<dignifiedquire>
whyrusleeping: whats up?
<zoobab>
go on openwrt seems not to be included in trunk
<whyrusleeping>
dignifiedquire: anything i can do to help distributions along?
Guest34111 is now known as otherbrian
<otherbrian>
hi, i have a question about the development process...
<whyrusleeping>
otherbrian: i probably have an answer
<otherbrian>
i noticed that the main project (e.g. go-ipfs) contains subprojects (e.g. go-libp2p) and the subprojects are often ahead
<otherbrian>
a) where should development happen?
<otherbrian>
b) when do these merge?
<otherbrian>
c) how do they merge?
<whyrusleeping>
otherbrian: that depends on what youre working on
<whyrusleeping>
generally, we do any work on the subproject that we want to change
<whyrusleeping>
and then those changes are tested and merged in that project
<dignifiedquire>
whyrusleeping: these two things from last night
<whyrusleeping>
then once those are in, we can do a pull request to update the dependencies into the main project
<dignifiedquire>
dignifiedquire> 2 small things for dists before I can publish a preview to ipfs with downloads working, the links in the architecture don't have the correct ending, now that everything is zipped
<dignifiedquire>
21:54 <dignifiedquire> and the other is that the "id" for go-ipfs is "ipfs" but it needs to be "go-ipfs"
<whyrusleeping>
dignifiedquire: ah, cool. i'll do that right now
jandresen has joined #ipfs
<dignifiedquire>
whyrusleeping: make sure to pull first, I just pushed some stuff I did last night
<dignifiedquire>
whyrusleeping: making the PR for the new webui now
<whyrusleeping>
dignifiedquire: having the id be 'go-ipfs' is mildly obnoxious...
<whyrusleeping>
i use the name of the binary as the id
<dignifiedquire>
hmm
<whyrusleeping>
i guess i could just use the name of the dist folder
<dignifiedquire>
but they should be different, and if we add js-ipfs what do you do then? :P
<whyrusleeping>
inferior-language-ipfs ;)
<dignifiedquire>
lol
<whyrusleeping>
slow-ipfs
<whyrusleeping>
single-threaded-ipfs
<dignifiedquire>
1 million module ipfs
<whyrusleeping>
javascripts are a great language, i'm sure that all will be fine
<whyrusleeping>
i'll change the tag to be go-ipfs
<dignifiedquire>
thanks :)
<dignifiedquire>
whyrusleeping: fs-repo-migrations doesn't like me
<whyrusleeping>
join the club
<dignifiedquire>
I have an 0.4 repo and try to run master
<whyrusleeping>
if you want to PR a new webui i can add that to 0.3.11 as well
<dignifiedquire>
whyrusleeping: working on the webui now
<dignifiedquire>
nearly done
<ipfsbot>
[js-ipfs] diasdavid created status/update-jan-12-2016 (+1 new commit): http://git.io/vzOHY
<ipfsbot>
js-ipfs/status/update-jan-12-2016 949d00c David Dias: update project roadmap, make it more explicit, help organize
<ipfsbot>
[js-ipfs] diasdavid opened pull request #47: update project roadmap, make it more explicit, help organize (master...status/update-jan-12-2016) http://git.io/vzOHc
<otherbrian>
@whyrusleeping so in the tcp reuseaddr/reuseport case, you are just connecting to a known peer endpoint (from bootstrapping) and then continuing to use that open socket for everything else?
chriscool has joined #ipfs
Matoro_ has joined #ipfs
<whyrusleeping>
otherbrian: basically, yeah
Tristitia has joined #ipfs
Matoro_ has quit [Ping timeout: 240 seconds]
<otherbrian>
and instead of TURN servers, relaying is done through other peers?
<achin>
at the moment, i don't believe ipfs does any relaying
<otherbrian>
it's mentioned in the p2p spec but perhaps not implemented?
<achin>
right. that's my understanding
<richardlitt>
whyrusleeping: how do I update to 0.4.0, again?
<whyrusleeping>
richardlitt: you can use ipfs-update
<ipfsbot>
[webui] Dignifiedquire opened pull request #188: Fixes and release of a new version for 0.3.11 (master...0-3-11) http://git.io/vzOxr
<richardlitt>
Code and IPFS repo seem the same, to me.
<whyrusleeping>
its the easiest way IMO
<richardlitt>
whyrusleeping: Should I be using v0.4.0-dev for building the API?
<whyrusleeping>
yes
<otherbrian>
thanks for the info guys, just starting to dig around. super interesting project.
<ipfsbot>
[go-ipfs] Dignifiedquire opened pull request #2188: feat: Update to the latest version of the webui (master...feat/webui-update) http://git.io/vzOpj
<whyrusleeping>
otherbrian: bear with us for a little bit, we're working on a transition for merging in a new version with lots of changes
<whyrusleeping>
so there might be a bit of mess during the process
<achin>
richardlitt: sorry, i don't think i properly understood your question. but if you're no longer confused, then no need to explain it to me :)
M-roblabla1 has joined #ipfs
<richardlitt>
achin: I'm confused by your answer, but not my question :)
<otherbrian>
will do. i've been day-dreaming about something sort of like this for a while so it's cool to find it and see that it's really happening. hope to find something i can contribute to after i find my way around.
<richardlitt>
whyrusleeping: works, thanks.
<achin>
if you're confused by my answer, that means i was confused by your question! this is starting to feel like a monday morning
<dignifiedquire>
richardlitt: can you please test the new webui with 0.3.11 as well
<Thexa4>
Hi, I'm trying to use ipfs in an offline environment and can't seem to find documentation about caching of ipns records. Do I need to be able to reach the original node to resolve an ipns key?
zen|merge has quit [Read error: Connection reset by peer]
devbug has joined #ipfs
<otherbrian>
whyrusleeping: twitter says you are in seattle?
<ipfsbot>
[webui] Dignifiedquire closed pull request #185: Update i18next-browser-languagedetector to version 0.0.14
<whyrusleeping>
otherbrian: well, normally yes
<otherbrian>
whois says georgia
<otherbrian>
:P
<whyrusleeping>
i'm in paris
<whyrusleeping>
for now
<otherbrian>
whois is very wrong then!
<whyrusleeping>
quite!
<otherbrian>
well cool, i am in seattle
<achin>
whyrusleeping is a world-traveler
<whyrusleeping>
otherbrian: awesome, might have to meet up when i get back for some coffee
<whyrusleeping>
go get -u github.com/ipfs/fs-repo-migrations
<richardlitt>
heh, I have an alias
<richardlitt>
Cool. Thanks.
devbug has quit [Ping timeout: 272 seconds]
<whyrusleeping>
lol
chriscool has quit [Quit: Leaving.]
vijayee has joined #ipfs
<vijayee>
is there a node merkledag datastructure for ipfs implemented?
<whyrusleeping>
vijayee: i beleive so
<whyrusleeping>
daviddias: was working on that
rombou has joined #ipfs
<daviddias>
merkledag as in with protobufs -> WIP
<vijayee>
oh sweet, I started working on one from the go version
<vijayee>
but I thought I should ask before I roll out my reinvented wheel
<vijayee>
they see me rollin...
<vijayee>
daviddias: is it far along? what is needed?
<tperson>
If you change the way the merkle tree is built, that changes the hash; it would also change the way you reconstruct the underlying file. Is that a versioned/named system?
<dignifiedquire>
daviddias: please test the new webui in the PR as well
<vijayee>
daviddias: it's not too far, started yesterday, but I'm trying to mirror how the data is structured
<daviddias>
you can find a pretty much complete version of IPFS Repo here https://github.com/ipfs/js-ipfs-repo which has the block store (called datastore)
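For anyone following along, the shape of a merkledag node is simple: opaque data plus named links to the hashes of child nodes, with the node's own identity derived from both. The sketch below is illustrative only (the real go-ipfs/js-ipfs format is a protobuf with multihashes, as daviddias notes); the hashing scheme here is a made-up stand-in.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
)

// Link names a child node by the hash of its contents.
type Link struct {
	Name string
	Hash string
}

// Node is a merkledag node: opaque data plus links to children.
type Node struct {
	Data  []byte
	Links []Link
}

// Hash derives the node's identity from its data and its links, so any
// change in a child propagates up through every ancestor's hash.
func (n Node) Hash() string {
	h := sha256.New()
	h.Write(n.Data)
	for _, l := range n.Links {
		h.Write([]byte(l.Name))
		h.Write([]byte(l.Hash))
	}
	return hex.EncodeToString(h.Sum(nil))
}
```

This propagation is also why tperson's point holds: changing how the tree is built changes every hash above the change.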
pfraze has joined #ipfs
<richardlitt>
dignifiedquire: sorry, was busy trying to get through that PR
<Ape>
Would it be possible to add refcounted direct pins?
<Ape>
E.g. ipfs pin add <hash1>; ipfs pin add <hash1>; ipfs pin rm <hash1>; <hash1> still pinned because there were 2 adds, but only one rm
<Ape>
If multiple applications would pin the same hash, I'd like to keep it pinned until there is nothing requiring it
<Ape>
An alternative would be to attach a tag or reason to pins so that there may be multiple reasons listed to pin one hash
<Ape>
Then you could only remove one "reason": ipfs pin rm --tag "application1" <hash1>
<The_8472>
tags sound cleaner
<The_8472>
ref counting is tricky
<The_8472>
easy to get wrong
<Ape>
Yeah
M-david has quit [Quit: node-irc says goodbye]
<Ape>
With tags you could also list everything pinned by one application
<The_8472>
yesterday someone suggested reflink copies. without pinning ipfs could just keep symlinks to the copied data. if the application decides to delete it the symlink would be dead and ipfs could decide to GC it
<Ape>
Or clean up all pins for an application you are uninstalling
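Ape's tag proposal can be sketched in a few lines: a hash stays pinned while any tag ("reason") remains, and unlike a bare refcount, pinning twice with the same tag is idempotent rather than silently bumping a counter. This is a toy in-memory model, not anything in go-ipfs.

```go
package main

// TagPins maps each pinned hash to the set of tags keeping it alive.
type TagPins struct {
	pins map[string]map[string]bool // hash -> set of tags
}

func NewTagPins() *TagPins {
	return &TagPins{pins: make(map[string]map[string]bool)}
}

// Add records a reason to keep hash; re-adding the same tag is a no-op.
func (p *TagPins) Add(hash, tag string) {
	if p.pins[hash] == nil {
		p.pins[hash] = make(map[string]bool)
	}
	p.pins[hash][tag] = true
}

// Remove drops one reason; the hash stays pinned while other tags remain.
func (p *TagPins) Remove(hash, tag string) {
	delete(p.pins[hash], tag)
	if len(p.pins[hash]) == 0 {
		delete(p.pins, hash)
	}
}

// Pinned reports whether anything still requires hash.
func (p *TagPins) Pinned(hash string) bool {
	return len(p.pins[hash]) > 0
}

// ByTag lists everything pinned by one application, so uninstalling it
// can clean up all of its pins at once.
func (p *TagPins) ByTag(tag string) []string {
	var out []string
	for h, tags := range p.pins {
		if tags[tag] {
			out = append(out, h)
		}
	}
	return out
}
```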
SuzieQueue has joined #ipfs
<The_8472>
so you could do without any pins at all. symlinks from the ipfs repo to wherever the application stores it would act as opportunistic pins
O47m341 has quit [Ping timeout: 272 seconds]
<Ape>
But would ipfs then need to scan all those symlinks all the time to see if they are deleted
<Ape>
Or if the data changes
<Ape>
Maybe it's enough to notice that when trying to actually read the data (e.g. when another peers requests it)
<The_8472>
for example, yeah
<The_8472>
or you could keep the symlink as pin + reflink copy as canonical storage. if the symlink is dead kill the reflink copy too
<The_8472>
filesystem based pinning
<Ape>
reflinking is kinda hard since it requires filesystem support and doesn't work when people have more than one filesystem
<The_8472>
it's an optimization, yes. without it you get duplicate storage
<The_8472>
but you currently have that anyway
<The_8472>
block storage + wherever the files are on disk
voxelot has joined #ipfs
<The_8472>
with symlinks only you can do away with that too at the cost of rechecking on read
arpu has quit [Remote host closed the connection]
<Ape>
reflinking wouldn't hurt but it adds complexity and requires time to implement and test
jgraef has quit [Ping timeout: 240 seconds]
<The_8472>
i think it's worth it if we're talking about terabytes of data
<Ape>
And it would only benefit users in rare cases
<The_8472>
rare cases such as someone ipfs add'ing their file collections?
<Ape>
If you want to store your terabyte collection to IPFS you should just delete the original files after adding
<Ape>
And replace them with symlinks to ipfs
<The_8472>
that would mean entirely entrusting your data to ipfs
<The_8472>
it's not something everyone wants to do
<whyrusleeping>
richardlitt: kyledrake wanna do a short post about 0.3.11?
<Ape>
True
<whyrusleeping>
basically just mention that its the last release before 0.4.0, and maybe allude to 0.4.0 just across the horizon
varav has joined #ipfs
jhulten has joined #ipfs
<The_8472>
i would feel more comfortable with having my data on zfs and letting ipfs either get its own CoW view on the data or symlinks.
<Ape>
Is 0.3.11 being released today?
<The_8472>
and not going through fuse also has performance benefits for samba&co
<Ape>
The_8472: That's a good point
moreati has quit [Quit: Leaving.]
<Ape>
Symlinking to ipfs instead of reflinking would make backups very nice, tho.
moreati has joined #ipfs
<Ape>
Just backup the symlinks and it works automatically
<Ape>
(And remember to pin the files somewhere else)
<Thexa4>
Is it possible to sign a hash with my private key using the ipfs binary?
<The_8472>
that's the opposite of I want, but to each their own
<Ape>
Or perhaps just replace the whole filesystem with ipfs directories and only backup the root hash :)
<dignifiedquire>
whyrusleeping: at the moment there is an issue that 0.4.0 is listed as the latest, though it probably should be 0.3.11 for now?
<dignifiedquire>
whyrusleeping: pushed everything, I think we should merge for now
ashark has joined #ipfs
m0ns00n has quit [Quit: undefined]
<Kubuxu>
whyrusleeping: should I move my PRs onto master? I will have to recreate them (either way I have to rebase them, because there was a force-push).
m0ns00n has joined #ipfs
patcon has joined #ipfs
chriscool has joined #ipfs
disgusting_wall has joined #ipfs
r1k03 has quit [Quit: Leaving]
simonv3 has joined #ipfs
chriscool has quit [Read error: Connection reset by peer]
chriscool has joined #ipfs
chriscool has quit [Read error: Connection reset by peer]
<ipfsbot>
[go-ipfs] Kubuxu opened pull request #2191: Make dns resolve paths under _dnslink.example.com (master...feature/dnslink) http://git.io/vz3Qz
<ipfsbot>
[go-ipfs] Kubuxu closed pull request #2184: Make dns resolve paths under _dnslink.example.com (dev0.4.0...feature/dnslink) http://git.io/vzfGs
jaboja64 has joined #ipfs
Encrypt has joined #ipfs
chriscool has joined #ipfs
jaboja has quit [Ping timeout: 256 seconds]
<dyce>
oh so the hash is just a pointer, if the file changes the original hash still points to that file?
<Ape>
Maybe you could try wget. I don't see how ipfs alone would scrape that
<patagonicus>
NightRa: Don't think so. If you download a dev4.0 build that will contain some help (start with ipfs files --help), but that documentation is not complete.
<Kubuxu>
I was just asking if there is something ready.
<dignifiedquire>
whyrusleeping: if you can, yes please won't get to anything tonight anymore
<patagonicus>
I can see a few problems with the script, but nothing bad. Just don't pass any URLs with spaces (or tabs or newlines) and don't run the script in parallel.
<Kubuxu>
It just mirrors the given page.
<NightRa>
patagonicus: Alright
<Kubuxu>
Already adding, thanks Ape
<NightRa>
Let me ask then: what is the role of IPFS-Files?
<patagonicus>
Ah, right it's not recursive. I just kinda assumed that.
<Ape>
I added it already :D
elima has quit [Ping timeout: 240 seconds]
<Kubuxu>
Ape: the lua.org?
O47m341 has quit [Ping timeout: 240 seconds]
<Ape>
Just see my ipfs.io link
<patagonicus>
IPFS files lets you modify files and directories in IPFS without using the fuse mount. I *think* that it's faster than the fuse version, but I'm not sure.
<Kubuxu>
Ahh, 0.4 is much quicker. :P
<Kubuxu>
IPFS Files/MFS is mostly useful for creating directory structures and so on.
<Kubuxu>
at least IMHO
<Ape>
Wget even converted the links and image src so that everything should be usable with just ipfs
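The wget-then-add workflow being discussed could look roughly like this (flags and paths are illustrative, not Ape's actual command, and it needs a running ipfs daemon):

```shell
# Mirror one page with its assets and rewrite links for offline browsing.
wget --page-requisites --convert-links --adjust-extension \
     --no-parent --directory-prefix=mirror http://example.com/page.html

# Add the result to IPFS; -r recursively adds every file in the tree.
ipfs add -r mirror
```

`--convert-links` is what makes the hrefs and image src attributes usable when served back through the gateway.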
<ipfsbot>
[webui] Dignifiedquire closed pull request #193: Update i18next-browser-languagedetector to version 0.0.13
<patagonicus>
It's still slow when I try to mirror the Alpine repo, but that might just be the server. Not a lot of power and way too much running on it. :/
<Kubuxu>
Ape: also we broke encoding.
O47m341 has joined #ipfs
<Kubuxu>
Original site is encoded in windows-1252 and IPFS version is in utf-8.
<Kubuxu>
This is something that needs to be accounted for.
<Ape>
There is a meta tag with charset=iso-8859-1 in the site HTML
<Ape>
And ipfs gateway isn't setting the encoding
<Ape>
I can't see why this doesn't work
<Ape>
Unless the site text isn't really iso-8859-1
<ipfsbot>
[go-ipfs] RichardLitt force-pushed feature/shutdown from 8e2c77c to 7e82f49: http://git.io/vui8j
<ipfsbot>
go-ipfs/feature/shutdown 7e82f49 Richard Littauer: Edited following @chriscool feedback...
<r4sp>
Hello...
fpibb has joined #ipfs
<Kubuxu>
For some reason FF renders it with UTF-8. Heh
<Ape>
We could convert the files to utf-8 after scraping
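The conversion Ape suggests could be done with iconv after scraping; a minimal sketch on a single file (the file name and sample content are hypothetical):

```shell
# Create a sample page with ISO-8859-1 (Latin-1) bytes; \351 is "é" in Latin-1.
printf 'caf\351\n' > page.html

# Re-encode to UTF-8 and replace the original.
iconv -f ISO-8859-1 -t UTF-8 page.html > page.utf8
mv page.utf8 page.html
```

For a whole mirror, the same command could be wrapped in a `find ... -name '*.html'` loop, updating the meta charset tags along the way.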
<r4sp>
I'm looking for an open-source project to work on in my free time... I saw a few videos and this one seems really interesting... Is there another reason to convince me to choose this one? Don't get me wrong...
<Kubuxu>
Ape: I don't think so, it would make opening big files impossible.
<luigiplr>
Oh nice the webui is react
<achin>
richardlitt: if you are amenable to giving me contributor access to the weekly repo, i could push changes directly to the feat/jan-12 branch
<otherbrian>
so the error you might run into then, being behind a shitty nat such that you aren’t even aware of it, is that someone else might try to connect to you via an entry in the DHT and fail
<Ape>
Kubuxu: It doesn't read the data when opening. Only evaluates the path and gives an fd
<otherbrian>
which could result in your DAGs not being published… ?
<richardlitt>
achin: Cool, will check. Not sure I have the power to do that.
<richardlitt>
achin: will ask who does.
<luigiplr>
Aw, you guys are not full ES6
<richardlitt>
achin: fixed issues. Thanks. Also, added awesome_bot to check links, think it's a smart move
<achin>
you must make a sacrifice to the merkledag gods
<dignifiedquire>
luigiplr: http://bit.ly/22YhE7S take a look, you need to use = for propTypes otherwise they won't be included
<luigiplr>
Thats...
<luigiplr>
odd
<luigiplr>
i normally go with =
<luigiplr>
oh
<luigiplr>
wait
<luigiplr>
i read you so wrong
<jgraef>
Hey, since python-ipfs-api is outdated and has a lot of bloat to support python2, I reimplemented it here: https://github.com/jgraef/python3-ipfs-api . The lowlevel API is almost finished, we have docs and examples, a high-level merkledag module and more high-level APIs coming soon. I'd appreciate some feedback :)
<Kubuxu>
You modify JS so it is more usable; I dislike JS so much that if I have anything bigger to do in JS I go with ScalaJS.
Matoro has joined #ipfs
m0ns00n has joined #ipfs
<dignifiedquire>
jgraef: looking pretty nice :)
<ipfsbot>
[go-ipfs] RichardLitt force-pushed feature/shutdown from 7e82f49 to 48353a7: http://git.io/vui8j
<ipfsbot>
go-ipfs/feature/shutdown 48353a7 Richard Littauer: Edited following @chriscool feedback...
mec-is has joined #ipfs
voxelot has joined #ipfs
<whyrusleeping>
you guys better be happy with 0.4.0
<whyrusleeping>
i worked on that for like 10 months
<whyrusleeping>
thats almost a year
<dignifiedquire>
whyrusleeping: we will be :P
<dignifiedquire>
but just for a short while
<dignifiedquire>
we expect greater things to come
<achin>
like giant robot kitties
<luigiplr>
lol
danielrf has quit [Quit: WeeChat 1.3]
<dignifiedquire>
luigiplr: left some comments ;)
<luigiplr>
pfft
<luigiplr>
ill add some docs as well for functions etc
<dignifiedquire>
uuhh docs :)
<luigiplr>
lol
<dignifiedquire>
luigiplr: also one more nitpick, please try to keep lines under 80 characters
* luigiplr
slaps dignifiedquire around with a sperm whale
<luigiplr>
but sure :P
* dignifiedquire
tries to not look to evil
* dignifiedquire
too
Matoro has quit [Read error: Connection reset by peer]
Matoro has joined #ipfs
<dignifiedquire>
luigiplr: I really appreciate your work! so please don't take it the wrong way
<luigiplr>
haha np
<achin>
please think of the whales
<luigiplr>
ill do a final project-wide indent change back to 2 spaces at the end of the task
<luigiplr>
well app/scripts wide at least
<luigiplr>
:P
<dignifiedquire>
:D
chriscool has quit [Quit: Leaving.]
<dignifiedquire>
one more thing, no need to change the functional components to class
<dignifiedquire>
or will react treat them the same if they don't inherit from Component?
<luigiplr>
it should treat them the same however you would do for icon
<luigiplr>
new icon(stuffs)
<luigiplr>
rather than
<luigiplr>
icon(stuffs)
ispeedtoo has quit [Quit: Page closed]
<dignifiedquire>
but I do <Icon />
<dignifiedquire>
those are stateless react components
<luigiplr>
O
<luigiplr>
oopes
<luigiplr>
i..
<luigiplr>
mmmmmmmm
<luigiplr>
best would be as an actual react class but..
<Kubuxu>
dignifiedquire: That is why I code js in ScalaJS. I mostly stopped coding in Java and I am using Scala instead, and I can do the same in case of js.
<dignifiedquire>
luigiplr: by the way when you start thinking about adding alt, the individual pages need to be decoupled, as they will be broken into their own packages at some point
<ubuntu-mate>
i just want a way to pass a secure http or ftp file to an ipfs node directly
elima has quit [Ping timeout: 246 seconds]
<ubuntu-mate>
ideally i could run my node and then push to the bootstrap nodes
jfis has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<Kubuxu>
Yup, in 0.4 (today merged into master after the release of 0.3.11). It allows you to create and modify directory/file structures operating only on hashes and names.
<Kubuxu>
MFS - mutable file system
devbug has joined #ipfs
<ubuntu-mate>
neat ;)
<ubuntu-mate>
um btw does benet have an official YT channel?
<whyrusleeping>
note: until 0.4.0 is officially released, use the files api with a bit of care. there are some things i haven't quite flushed out with regards to the `--flush` flag
<Kubuxu>
So you add files then: ipfs files mkdir /mysite; ipfs files cp /ipfs/Qm...AAA /mysite/20160113/; ipfs files cp /mysite/20160113/ /mysite/current
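Kubuxu's one-liner above, spelled out with comments (the `Qm...AAA` hash is left elided as in the original; these commands assume a 0.4 daemon with the files API):

```shell
# Create a directory in the mutable file system (MFS).
ipfs files mkdir /mysite

# Link an already-added object into MFS by hash; no data is copied,
# only a new link is created.
ipfs files cp /ipfs/Qm...AAA /mysite/20160113/

# Point a stable name at the latest snapshot.
ipfs files cp /mysite/20160113/ /mysite/current

# Inspect the result.
ipfs files ls /mysite
```

Republishing the hash of `/mysite` (e.g. via IPNS) then gives visitors the current snapshot under a stable path.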