dignifiedquire has quit [Quit: Connection closed for inactivity]
Qwertie has joined #ipfs
Qwertie has quit [Max SendQ exceeded]
<whyrusleeping>
ansuz: svalbard was warm
<whyrusleeping>
we had leisurely swims in the ocean
<whyrusleeping>
and group showers
<whyrusleeping>
and casual walks up mountains
devbug has joined #ipfs
Qwertie has joined #ipfs
Qwertie has quit [Max SendQ exceeded]
corruptednode has joined #ipfs
patcon has quit [Ping timeout: 264 seconds]
devbug has quit [Ping timeout: 240 seconds]
linton_s_dawson has joined #ipfs
simonv3 has quit [Quit: Connection closed for inactivity]
fiatjaf__ has joined #ipfs
fiatjaf_ has quit [Ping timeout: 265 seconds]
blarp1 has quit [Quit: Leaving.]
blarp has joined #ipfs
<brimstone>
are the Apps on IPFS videos available anywhere?
codeforkjeff has joined #ipfs
<whyrusleeping>
brimstone: you mean from the meetings?
<brimstone>
yes
<whyrusleeping>
ask richardlitt
<whyrusleeping>
its probably like 3am for him though
<whyrusleeping>
so i don't think he's up
<brimstone>
ok, i'll ask in 12 hours or so
<brimstone>
or maybe he'll see this in the morning and drop me a hash :)
<whyrusleeping>
haha, maybe
hellertime has joined #ipfs
reit has quit [Quit: Leaving]
reit has joined #ipfs
voxelot has joined #ipfs
tmg has joined #ipfs
voxelot has quit [Client Quit]
voxelot has joined #ipfs
voxelot has joined #ipfs
voxelot has quit [Changing host]
<jbenet>
codeforkjeff: python is so pretty :) always has been a pretty language.
<jbenet>
codeforkjeff: yeah that's a bug i believe. it's filed in go-ipfs -- it's nasty and should be removed.
kaiza has quit [Ping timeout: 250 seconds]
kaiza has joined #ipfs
<codeforkjeff>
jbenet: thanks for the info! deltab also left a helpful comment on the gist explaining the issue further, I have a better grasp of what's going on now.
<codeforkjeff>
i'm guessing it's probably the json encoder in go-ipfs that's mistakenly decoding the bytes to latin-1 for some weird reason?
jhulten has joined #ipfs
jhulten has quit [Ping timeout: 240 seconds]
jhulten has joined #ipfs
fiatjaf__ has quit [Ping timeout: 240 seconds]
computerfreak has quit [Quit: Leaving.]
pfraze has quit [Remote host closed the connection]
ygrek has quit [Ping timeout: 256 seconds]
fiatjaf_ has joined #ipfs
Not_ has quit [Quit: Leaving]
<whyrusleeping>
jbenet: is the pretty part the old python2 syntax for print?
<davidar>
whyrusleeping (IRC): ew
<whyrusleeping>
speaking of pretty
<davidar>
print>>file, "wtf?"
<whyrusleeping>
i had some new keys for my keyboard waiting for me when i got back home
<davidar>
Sounds ominous...
<whyrusleeping>
if my internet were happy i would add a picture...
<davidar>
whyrusleeping (IRC): just ascii art it ;)
<davidar>
I'm probably going to regret asking this, but why are you replacing the keys on your keyboard?
nonaTure has quit [Quit: Leaving.]
wiedi has quit [Ping timeout: 240 seconds]
<whyrusleeping>
the old keys were too tall, and they had an ugly font
fiatjaf_ has quit [Read error: Connection reset by peer]
<whyrusleeping>
and i have an addiction to buying things on massdrop please help
<davidar>
whyrusleeping: move to australia, the shipping costs are a pretty good disincentive :p
<davidar>
"wtf, why does shipping cost three times as much as the thing I'm actually trying to buy?"
<whyrusleeping>
but then i would struggle to purchase enough food on a daily basis
<whyrusleeping>
or find coffee
<whyrusleeping>
or buy steam games
<whyrusleeping>
i'd end up living with the bogans
<davidar>
lol
<davidar>
"I'd have so much extra cash if I didn't have to eat and shelter" :p
<whyrusleeping>
but really!
<whyrusleeping>
being homeless and working in coffee shops would be so cheap
<whyrusleeping>
i'd be able to buy a house
<davidar>
pfssht, nobody can afford to buy a house in this market
<davidar>
the only people buying houses are people who have just sold a house
<whyrusleeping>
sometimes
<davidar>
I don't understand how supply and demand works in housing markets
<davidar>
everyone needs a home, and yet somehow there's always fewer houses than people, so everyone has to out-bid each other to ridiculous levels...
<davidar>
I don't get it, it seems like a pretty fundamental thing that any advanced society should have solved
<davidar>
anywho, /rant
codeforkjeff has quit [Quit: Page closed]
ygrek has joined #ipfs
<whyrusleeping>
lol
<whyrusleeping>
yeah...
<whyrusleeping>
i'm actually looking for a home right now
<whyrusleeping>
scrolling through craigslist
pfraze has joined #ipfs
pfraze has quit [Remote host closed the connection]
<jbenet>
whyrusleeping lets move to Berlin :)
<whyrusleeping>
meh, berlin sucked
<whyrusleeping>
copenhagen, paris, oslo, or lisbon were great though
<whyrusleeping>
and longyearbyen was awesome
<jbenet>
prob didn't stay in the right Berlin places :)
corruptednode has quit [Ping timeout: 276 seconds]
flounders has quit [Ping timeout: 250 seconds]
Tv` has quit [Quit: Connection closed for inactivity]
elima_ has joined #ipfs
pfraze has quit [Remote host closed the connection]
hashcore has joined #ipfs
pfraze has joined #ipfs
rendar has joined #ipfs
ygrek has quit [Ping timeout: 240 seconds]
s_kunk has quit [Ping timeout: 240 seconds]
mildred has joined #ipfs
zz_r04r is now known as r04r
ylp1 has joined #ipfs
elima_ has quit [Ping timeout: 272 seconds]
dignifiedquire has joined #ipfs
elima_ has joined #ipfs
hashcore has quit [Ping timeout: 245 seconds]
disgusting_wall has quit [Quit: Connection closed for inactivity]
flounders has joined #ipfs
chriscool has joined #ipfs
s_kunk has joined #ipfs
IlanGodik has quit [Quit: Connection closed for inactivity]
The_8472 has quit [Ping timeout: 264 seconds]
The_8472 has joined #ipfs
zaggynl has quit [Quit: reboot]
pfraze has quit [Remote host closed the connection]
nonaTure has joined #ipfs
m0ns00nfup has joined #ipfs
m0ns00nfup has quit [Client Quit]
m0ns00nfup has joined #ipfs
m0ns00nfup has quit [Quit: undefined]
zorglub27 has joined #ipfs
hashcore has joined #ipfs
_marvin_ has quit [Ping timeout: 260 seconds]
Encrypt has joined #ipfs
_marvin_ has joined #ipfs
wiedi has joined #ipfs
fiatjaf_ has joined #ipfs
hashcore has quit [Ping timeout: 265 seconds]
false_chicken has joined #ipfs
mildred has quit [Ping timeout: 272 seconds]
<false_chicken>
Hey. I posted a question on the Github repo but it's not really an issue and I posted it before I found the IRC. Maybe someone can just answer here and I can close it? https://github.com/ipfs/ipfs/issues/154
JasonWoof has quit [Read error: Connection reset by peer]
vijayee has joined #ipfs
vijayee has quit [Client Quit]
<Kubuxu>
false_chicken: Currently you need to connect to peer directly to fetch block from it
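(A minimal sketch of that direct-connect flow with the standard go-ipfs CLI; the multiaddr and hash below are placeholders:)
    ipfs swarm connect /ip4/203.0.113.5/tcp/4001/ipfs/QmPeerID   # connect to the peer holding the data
    ipfs cat QmSomeHash > file                                   # then fetch the block/file from it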
<dignifiedquire>
daviddias: are you around?
<daviddias>
I am :)
mildred has joined #ipfs
<false_chicken>
Ah. So currently nodes that do not have the data do not pass any data down the line?
<dignifiedquire>
I'm trying to run the tests on idb-blob-store
<dignifiedquire>
(also I have no idea what scat is)
<daviddias>
script cat
<daviddias>
it loads js code into a browser
<Kubuxu>
Yes, but relaying is planned (someone said it is even in the codebase but just disabled) and in the future bitswap (the p2p network) will be relaying AFAIK, there will be some things done to cover some ...
<daviddias>
it is a quick way to test js code in the browser
<dignifiedquire>
hmm probably should add scat and browserify to devdeps then in that pr
<jbenet>
Hey everyone, i'm looking for an RA (research assistant). If you would like to work with me on finding papers, cleaning up protocols, writing papers, etc. (pub/sub will be our first direction) please ping me. Familiarity with p2p and distributed systems helps, but isn't necessary. Interest in those _is_ necessary :)
<daviddias>
I agree, but not everyone has the same opinion as me and I didn't want to impose since it isn't my repo, I've seen substack depending on browserify as a global thing
<daviddias>
jbenet: o/ me me me me :D
<dignifiedquire>
I see,
<false_chicken>
Hmm... Interesting. We are wondering how the legal ramifications of this will be sorted. Because transmitting illegal content in many countries is well... illegal.
<Kubuxu>
false_chicken: Bitswap is meant to be cooperative, so to get better QoS from some node your node will try to provide it with some blocks. That is all AFAIK.
<false_chicken>
So if your node happens to be in some ones way and pass some blocks to them they could be implicated. I see. Thanks for the input.
<jbenet>
daviddias: <3 you're already counted as part of it :]
<daviddias>
:D :D
<Kubuxu>
This is the problem of deciding when content is illegal: is 50% of the content illegal, is 1% of it still illegal, is 0.01% (which might be part of other content) illegal?
<jbenet>
false_chicken i responded to your issue on github
<jbenet>
look at the faq
<false_chicken>
Ah... Thanks!
<Kubuxu>
(also ISPs and other providers cover it somehow).
<false_chicken>
jbenet This is awesome tech. When I saw the demo my mind exploded with the possibilities! I set up a few nodes and played with it and I am so excited. That's why I wanted these legal questions out of the way lol.
<daviddias>
dignifiedquire: you mean, running idb tests?
<dignifiedquire>
running idb through the abstract-blob-store tests
<daviddias>
dignifiedquire: blob-store tests are not very intensive, as they only check for correctness of the interface implementation
<daviddias>
running js-ipfs-repo tests with idb-blob-store will reveal the racing conditions
<fr34kyn01535>
I mean, tor allows me to deny being an exit route, i know tor is different, but in this specific system, if my neighbour were to download child porn, my whole neighbourhood would be busted
<fr34kyn01535>
or at least the node that passes the file to him...
<false_chicken>
Very valid concerns xD.
<xicombd_>
dignifiedquire: you can try adding `require('events').EventEmitter.prototype._maxListeners = 100` to avoid that warning. The failing tests should be either direct keys not found, or timeouts because of lock problems related to that as well
<dignifiedquire>
xicombd_: okay will see if I can figure out what's happening
<xicombd_>
thanks!
fr34kyn01535 has quit [Ping timeout: 252 seconds]
hashcore has joined #ipfs
joshbuddy has quit [Quit: joshbuddy]
<dignifiedquire>
daviddias: do you have time for a quick call?
<daviddias>
dignifiedquire: sure
<dignifiedquire>
daviddias: too slow ;) already figured out my problem
<daviddias>
ahah
nonaTure has quit [Quit: Leaving.]
IlanGodik has joined #ipfs
hashcore has quit [Read error: Connection reset by peer]
<patagonicus>
I've commented in https://github.com/ipfs/go-ipfs/issues/2245 that it would be useful if the go-ipfs docker image allowed changing the UID/GID of the user ipfs is running as, so that you can prevent random users on the system from being able to read/modify ipfs' repo. I could probably make the necessary changes to the Dockerfile and container_daemon, but I'd like to know whether that is something that would be merged
<patagonicus>
first. Can anyone comment on this?
<brimstone>
richardlitt: are recordings of the Apps on IPFS meetings available anywhere?
raxrb has joined #ipfs
libscott_ is now known as libscott
<raxrb>
how can I know about the total number of objects in local repo ?
disgusting_wall has joined #ipfs
<lgierth>
raxrb: ipfs refs local
<raxrb>
so the total number of hashes represents the total objects?
<Kubuxu>
raxrb: objects != files if that is what you mean
<raxrb>
Kubuxu: i want to know total number of objects
<raxrb>
Kubuxu: When we say objects in ipfs, we mean a location
<raxrb>
??
JasonWoof has joined #ipfs
JasonWoof has quit [Changing host]
JasonWoof has joined #ipfs
<raxrb>
file
<raxrb>
+ directory is equal to total objects
<Kubuxu>
Might be but when IPLD comes in it will be far from true.
<Kubuxu>
`ipfs refs local` shows the number of blocks, the smallest building units.
<raxrb>
so where is the information stored regarding which blocks belong to which objects
<dignifiedquire>
daviddias: xicombd_ I'm pretty sure I figured out the issue, now trying to fix
<patagonicus>
raxrb: In other blocks. The hash you get for a large file is a block that references other blocks.
<raxrb>
patagonicus: so I need to count the top level blocks
<patagonicus>
Well, depends on what you mean by top level. If you have a directory it will also link to other files and directories, which in turn can link to more files and directories, so you want to count those. You could start with all pinned hashes and recurse into directories and then make sure that you don't count files more than once if they are in more than one directory.
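(A rough way to do that counting from the shell, sketched with standard commands; the root hash is a placeholder, and as noted above blocks don't map 1:1 to files because of chunking:)
    ipfs refs local | wc -l                        # count every block in the local repo
    ipfs refs -r QmSomeRootHash | sort -u | wc -l  # count unique blocks reachable from one root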
<dignifiedquire>
FUUUUCK
<patagonicus>
Something on fire?
<dignifiedquire>
node streams suck...
<dignifiedquire>
they are so brittle
<dignifiedquire>
it's unbelievable
<Kubuxu>
dignifiedquire: I've recently pushed 3GiB through node streams into DB but it might depend on application.
hellertime has joined #ipfs
<dignifiedquire>
Kubuxu: it's not that they don't perform, it's just if you implement them and pipe things together there are so many subtle things that can go wrong, it's just wooooot
<Kubuxu>
Yup,
<Kubuxu>
this is unfortunate but I've seen some nice implementation wrappers for streams. They might make things better.
<dignifiedquire>
daviddias: now I could use that call
tmg has quit [Ping timeout: 245 seconds]
<dignifiedquire>
daaaaavid
aar- has quit [Remote host closed the connection]
<daviddias>
dignifiedquire: I'm coming :) was busy getting my butt kicked for not training for more than one month :)
<dignifiedquire>
daviddias: :D:D:D
<dignifiedquire>
and I'm just sitting here fixing bugs ;)
Oatmeal has quit [Read error: Connection reset by peer]
corvinux has joined #ipfs
corvinux has quit [Read error: Connection reset by peer]
corvinux has joined #ipfs
Tv` has joined #ipfs
pfraze has quit [Remote host closed the connection]
chriscool has quit [Ping timeout: 272 seconds]
zorglub27 has joined #ipfs
<grncdr>
benoliver999: you can definitely "unpin" which doesn't actively remove the local data but allows it to be removed by normal cache expiration...
<grncdr>
not sure if that covers your needs
<benoliver999>
Yeah it does
<grncdr>
so `ipfs pin rm`
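(For example, with a placeholder hash: unpin first, and the data becomes eligible for the next garbage collection run:)
    ipfs pin rm QmSomeHash
    ipfs repo gc        # optional: reclaim the space immediately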
<benoliver999>
Also, I'm struggling to run ipfs mount
corvinux has quit [Ping timeout: 272 seconds]
<benoliver999>
both /ipfs and /ipns exist, and are owned by me
linton_s_dawson has joined #ipfs
linton_s_dawson has left #ipfs [#ipfs]
corvinux has joined #ipfs
<Kubuxu>
benoliver999: do you have FUSE userspace installed
<Kubuxu>
?
chriscool has joined #ipfs
<benoliver999>
Good question
<Kubuxu>
ls /etc/fuse.conf -l
<benoliver999>
It's there
<Kubuxu>
show me permissions of /etc/fuse.conf and /dev/fuse
<benoliver999>
All root - does that need to change? Sorry for the noob questions..
<Kubuxu>
I am more interested in permissions themselves (not owners), and groups.
<benoliver999>
/dev/fuse is crw-rw-rw-
<benoliver999>
fuse.conf -rw-r--r--
pfraze has joined #ipfs
<M-mubot>
fuse.conf -rw-r--r has -1 points
<Kubuxu>
M-mubot: yup
<Kubuxu>
benoliver999: are you only user of the system?
<benoliver999>
Yeah
<Kubuxu>
M-mubot: on the other hand mine works with -rw-r--r--. Weird
water_resistant has joined #ipfs
<Kubuxu>
can you also do ls -l /ipfs and show me the permissions, groups and owners
<benoliver999>
drwxr-xr-x 2 ben root 4096
m0ns00nfup has quit [Quit: undefined]
corvinux has quit [Quit: IRC for Sailfish 0.9]
<Kubuxu>
what is the exact message from ipfs mount?
<benoliver999>
Error: fuse failed to access mountpoint /ipfs
<benoliver999>
ERROR core/comma: error mounting: fusermount: exit status 1 fusermount: exit status 1 mount_unix.go:219 <--- From the daemon
ylp1 has quit [Quit: Leaving.]
patcon has joined #ipfs
<Kubuxu>
I had something similar, trying to remember the cause.
<Kubuxu>
do `lsmod | grep fuse` and if you `ls -l /dev/fuse` there should be two numbers after group and owner
<Kubuxu>
also cat /etc/fuse.conf
<benoliver999>
OK, nothing in lsmod... and /dev/fuse is 10, 229
mildred has quit [Ping timeout: 264 seconds]
<Kubuxu>
benoliver999: then `modprobe fuse`
<benoliver999>
Huh, not found
<Kubuxu>
from root?
<benoliver999>
Yeah
<benoliver999>
I will reboot the machine
<benoliver999>
I can't remember if there was a kernel upgrade
<benoliver999>
It's on arch so probably yes
<Kubuxu>
I am on arch and it works. I have the module and it works. Weird
<benoliver999>
I don't know if it's the same thing, but when I do an upgrade and the kernel gets upgraded, if I don't reboot I can't mount certain things
<Kubuxu>
Weird, then try rebooting.
Oatmeal has joined #ipfs
pfraze has quit [Remote host closed the connection]
<benoliver999>
Bingo
<Kubuxu>
did it work?
<benoliver999>
heh well
<benoliver999>
IPFS mounted at: /ipfs
<benoliver999>
IPNS mounted at: /ipns
<benoliver999>
Now it's permission denied
<Kubuxu>
You can't ls /ipfs
<Kubuxu>
it is normal
<benoliver999>
Ah it's just me being an eejit
<Kubuxu>
I had the same, woo effect
<kpcyrd>
benoliver999: modprobe fails after a kernel upgrade
gaboose has joined #ipfs
<kpcyrd>
because the modules for the running kernel are gone
<kpcyrd>
you can try to load all modules you need after booting
<benoliver999>
Yeah, it's all up and running now
<benoliver999>
I always forget - when mount fails, reboot. Includes this...
<Kubuxu>
kpcyrd: ahh, forgot about that
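(Recapping the diagnosis above as a short checklist, using the same paths from the conversation:)
    ls -l /dev/fuse /etc/fuse.conf   # device and config must exist and be readable
    lsmod | grep fuse                # empty output means the fuse module isn't loaded
    sudo modprobe fuse               # fails after a kernel upgrade until you reboot
    ipfs mount                       # with /ipfs and /ipns existing and owned by your user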
patcon has quit [Read error: Connection reset by peer]
<benoliver999>
So what am I actually looking at in /ipns? Stuff that I added with ipfs add <file>?
<Kubuxu>
In /ipfs you can access everything there is in the network
<Kubuxu>
and in /ipns
<Kubuxu>
you have /ipns/local/ which is your local directory
<Kubuxu>
you can just edit files there, copy in, move, rearrange and it will automagically update published hash.
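(A tiny illustration of that, assuming the daemon is running with /ipns mounted; the file name is made up:)
    cp article.html /ipns/local/   # write into your mutable namespace
    ipfs name resolve              # your published hash updates automatically (coalesced, see below)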
<benoliver999>
Oh shit I get it
<benoliver999>
Holy crap this is amazing
<Kubuxu>
you can do cd /ipns/ipfs.io/ to see directory of ipfs.io website.
<benoliver999>
Yeah
<Kubuxu>
or it doesn't work
<Kubuxu>
with ipfs.io
<Kubuxu>
looks like full resolving isn't there
<benoliver999>
Yeah it's only hashes i presume?
<Kubuxu>
Yup
<benoliver999>
SO
<Kubuxu>
But you can see my website: ls /ipns/QmdKbeXoXnMbPDfLsAFPGZDJ41bQuRNKALQSydJ66k1FfH/
<benoliver999>
The best way to preserve file names is to have the hash be a directory, and have the files inside?
<Kubuxu>
Yes
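(One way to get that layout, with a placeholder file name: `ipfs add -w` wraps the file in a directory object so the name survives:)
    ipfs add -w notes.txt    # prints the file hash and the wrapping directory hash
    ipfs ls QmWrapDirHash    # shows notes.txt by name inside the directory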
jhulten has joined #ipfs
voxelot has joined #ipfs
<benoliver999>
And now, just in the act of doing ls /ipns/haswerfwwefetc.. - I have a copy of the data that I in turn upload to peers?
<Kubuxu>
Yes, but only a small fraction of it (as you didn't access any files, the only thing you have is the directory structure)
<benoliver999>
Ah I see
<Kubuxu>
I have a directory ~/uploaded where I put things that I want to store in IPFS
<Kubuxu>
Download 1add.sh and you can make your own like that.
<benoliver999>
So it adds then symlinks stuff?
m0ns00nfup has joined #ipfs
<Kubuxu>
Yup, just back up your data before adding (it might be buggy).
<benoliver999>
Of course
<Kubuxu>
I found out that it is the best way to store hashes.
<Kubuxu>
I have my data accessible, hashes are there and adding and saving the hash is automated.
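(The 1add.sh script itself never appears in this log; a hypothetical "add then symlink" helper along those lines might look like this:)
    #!/bin/sh
    # usage: addlink.sh <file>  -- hypothetical sketch, not Kubuxu's actual script
    hash=$(ipfs add -q "$1" | tail -n1)        # -q prints only the hash
    ln -sf "/ipfs/$hash" "$1.ipfs"             # keep a symlink next to the original
    echo "$hash  $1" >> ~/uploaded/hashes.txt  # record the hash for later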
<benoliver999>
If you change or remove data, the hash stays the same?
<benoliver999>
From within the directory, I mean?
<Kubuxu>
It is read-only
<Kubuxu>
you can't change the data.
<benoliver999>
So if you change the site, it gets re-published at another hash
<benoliver999>
Sorry, a site, I mean, for example
<Kubuxu>
If I change something I need to republish it.
<kpcyrd>
Kubuxu: do you know if it automatically unpins old files so I can just gc them?
<Kubuxu>
kpcyrd: the IPNS mount?
<kpcyrd>
still looking for a simple solution for my mirror project
<Kubuxu>
no that is why it sucks
<ipfsbot>
[js-ipfs] diasdavid pushed 1 new commit to master: https://git.io/vzMa4
<ipfsbot>
js-ipfs/master 93e08c4 David Dias: Update README.md
<Kubuxu>
depends on how big it is but old structure doesn't take much.
<Kubuxu>
s/it/the project/
<Kubuxu>
I plan to store all of my site versions when I update to 04x
patcon has joined #ipfs
<kpcyrd>
Kubuxu: I think the disk would be full after some months
<benoliver999>
So let's say I want to push out a newsletter every week. The best way to do it would be to just make a hash per-file and send it out each time. If I had a dir full of them that was updated all the time, it would get a new hash anyway with every update?
<Kubuxu>
How big are your changes.
<kpcyrd>
Kubuxu: archlinux/
<Kubuxu>
mirror?
<kpcyrd>
yes
<Kubuxu>
benoliver999: use ipscend for sites (it is rough but works)
<Kubuxu>
kpcyrd: how big is the arch mirror (I wonder how long the hashing would take).
simonv3 has joined #ipfs
<kpcyrd>
Kubuxu: not that long, but there was a bug in `ipfs add` that leaked memory last time I checked
reit has quit [Quit: Leaving]
pfraze has joined #ipfs
<kpcyrd>
had to create a huge swap file so the server stays alive
<kpcyrd>
might be fixed now, tho
patcon has quit [Read error: Connection reset by peer]
patcon has joined #ipfs
adamcurry has joined #ipfs
<Kubuxu>
kpcyrd: 04 fixes a lot of things about adding.
computerfreak has quit [Ping timeout: 240 seconds]
Oatmeal has quit [Ping timeout: 240 seconds]
<kpcyrd>
I'll try again soon
<kpcyrd>
soon-ish
<whyrusleeping>
yeah, thats fixed on master
<Kubuxu>
it also adds files API allowing for simple creation of directory structures inside IPFS
computerfreak has joined #ipfs
simonv3 has quit [Ping timeout: 240 seconds]
simonv3 has joined #ipfs
m0ns00nfup has quit [Quit: undefined]
<Kubuxu>
kpcyrd: so you could do something like: add changed file, move it to correct directory inside mirror structure, remove direct pin of the file, remove old file from IPFS directory structure
m0ns00nfup has joined #ipfs
<Kubuxu>
s/IPFS/mirror
<kpcyrd>
I'm currently *) resolve *) add folder *) unpin old hash *) publish new hash
<kpcyrd>
but my script for that is fugly
<kpcyrd>
if it fails it builds up garbage that is never removed
<ion>
If you had a version of the IPFS daemon with automatic garbage collection, that would have a race condition. :-)
<kpcyrd>
I'd be glad to replace it with proper tooling for large mirrors :)
<ion>
Just pin the new hash before unpinning the old one.
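(Sketching ion's suggestion as a script fragment; the directory name is a placeholder and this assumes publishing under the node's own key:)
    old=$(ipfs name resolve | sed 's|^/ipfs/||')
    new=$(ipfs add -r -q mirror/ | tail -n1)     # recursive add pins the new root
    ipfs name publish "$new"
    [ "$old" != "$new" ] && ipfs pin rm "$old"   # only unpin after the new root is pinned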
<whyrusleeping>
i'm gonna add a pin command to ipfs files
<whyrusleeping>
so you can do 'ipfs files pin /path/in/files/land'
<kpcyrd>
ion: that doesn't solve all issues I could think of
<whyrusleeping>
and it will automatically manage unpinning the old thing, and repinning the new thing as things are changed
patcon has quit [Read error: Connection reset by peer]
<voxelot>
whyrusleeping: w00t
<Kubuxu>
will there be a fuse mount for the files API?
<whyrusleeping>
eventually, yes
<whyrusleeping>
i'm going to break the fuse code into a separate binary first though
<whyrusleeping>
(or maybe i should start by making the files api stuff a separate binary first?)
<kpcyrd>
whyrusleeping: if that works for directories it could be a solution for my issue
<Kubuxu>
also files mv with unpinning the source would be nice (although not strictly required)
<whyrusleeping>
kpcyrd: yep! it would work for directories
<whyrusleeping>
mv with unpinning the source?
<Kubuxu>
yup
<whyrusleeping>
elaborate? (coffee still kicking in)
<Kubuxu>
I add file, get /ipfs/QmAAAA which is pinned
<whyrusleeping>
but it would be the equivalent of an `ipfs add` and `ipfs files cp`
<whyrusleeping>
plus!
<whyrusleeping>
it has the added benefit of making the semantics very clear
<whyrusleeping>
we are making a 'copy' of the data
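(In 0.4 terms that pair of steps looks roughly like this; the file and the /mirror path are invented, and the mkdir --parents flag is assumed:)
    hash=$(ipfs add -q pkg.tar)                   # adds (and pins) the data
    ipfs files mkdir -p /mirror                   # create the target directory once
    ipfs files cp "/ipfs/$hash" /mirror/pkg.tar   # 'copy' it into the files API tree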
simonv3_ has joined #ipfs
<Kubuxu>
The additional step of adding the file to IPFS before using it in files took me a moment.
<kpcyrd>
whyrusleeping: maybe add a command for `ipfs unpin --all-except Qmxyz` that clears out everything except the preset and the hash (and children of the hash) I've specified
m0ns00nfup has quit [Quit: undefined]
m0ns00nfup has joined #ipfs
<Kubuxu>
I have command/alias for this :p, lemme dig it up
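(Kubuxu's alias never makes it into the log; a hypothetical equivalent, keeping only one recursive pin and everything under it, could look like this:)
    KEEP=Qmxyz   # placeholder for the hash to keep
    ipfs pin ls --type=recursive | cut -d' ' -f1 | grep -v "^$KEEP$" | xargs -r ipfs pin rm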
<kpcyrd>
ok, nice. I was recalling something about discovery by joining known swarms, was probably wrong
<kpcyrd>
anyway, gotta run, I'll check the backlog
<dignifiedquire>
daviddias: I don't know what to call my store
<dignifiedquire>
idb-blob-store is taken
<dignifiedquire>
:/
<Kubuxu>
It had a problem as I was losing bootstraps on my IPv6 only node (no longer IPv6 only).
<Kubuxu>
That was the solution.
<dignifiedquire>
and indexeddb-blob-store is horrible
trn has quit [Quit: quit]
HastaJun has quit [Ping timeout: 260 seconds]
rabble has quit [Remote host closed the connection]
adamc1999 has quit [Read error: Connection reset by peer]
Encrypt has quit [Ping timeout: 260 seconds]
<cryptix>
we had that initially
Oatmeal has joined #ipfs
adamc1999 has joined #ipfs
Encrypt has joined #ipfs
Akaibu has quit [Quit: Connection closed for inactivity]
<ion>
whyrusleeping: Are there plans to implement pinning on top of the files API, i.e. the other way around? Perhaps a special root level directory under which everything is kept pinned. Applications could hold subdirectories under it.
<whyrusleeping>
possibly, yeah
<Kubuxu>
isn't everything in the files API pinned?
trn has joined #ipfs
<ion>
Good point. I'm not sure I want anything placed into files to disappear by itself.
<daviddias>
dignifiedquire: was on a call
<dignifiedquire>
daviddias: no worries, I called it idb-plus-blob-store :D
Akaibu has joined #ipfs
<daviddias>
dignifiedquire: I would go for the full name
<daviddias>
indexed-db-blob-store
<daviddias>
ahah
<daviddias>
not plus plus?
<daviddias>
:P
<dignifiedquire>
I was tempted to do idb-++-blob-store
<daviddias>
dignifiedquire: again, ping substack and ask if he wants to merge/replace, no need to have a broken one available
<dignifiedquire>
daviddias: will do when it's actually working
<dignifiedquire>
ogd: it will be as that one is borken
<dignifiedquire>
*broken
pfraze has quit [Remote host closed the connection]
parkan has joined #ipfs
water_resistant has quit [Ping timeout: 240 seconds]
voxelot has quit [Ping timeout: 250 seconds]
pfraze has joined #ipfs
voxelot has joined #ipfs
water_resistant has joined #ipfs
<gaboose>
Kubuxu: things in mounted /ipfs/ aren't necessarily pinned by you. could be by anyone in the network.. i think
<ogd>
dignifiedquire: hmm all the tests passed for me just now
<Kubuxu>
Gaboose: we were talking about files API that is coming in 04
<dignifiedquire>
ogd: yeah it's a nasty case not covered by the test cases, it's an issue with the writestream returned from createWriteStream. If you call .end('data', cb) on it the cb/finish event will be called before anything was written to the db
<dignifiedquire>
there are multiple races in the way the different streams are combined that lead up to that
<ogd>
dignifiedquire: oh has someone written a failing test for this yet?
<dignifiedquire>
I have failing tests locally but haven't pushed them yet
nonaTure has joined #ipfs
<gaboose>
Kubuxu: oh, ok.. does it have specs? i'd love to take a look :)
<dignifiedquire>
(took me most of today to actually figure out what was going wrong)
<dignifiedquire>
let me clean that up and send a PR
<whyrusleeping>
'oh look, you can make your own separate docker networks'
<whyrusleeping>
but you cant define routing between those nets
computerfreak has quit [Read error: Connection reset by peer]
<whyrusleeping>
and you cant change the gateway of those nets
<whyrusleeping>
(or at least, if you can, its not documented)
<gaboose>
there's always --net=host
<gaboose>
makes everyone share the same network interfaces
<gaboose>
worked for me in the past for simple cases
<gaboose>
ofc, i have no idea what you're working on...
<Kubuxu>
whyrusleeping: use LXC+btrfs, then you have 0 cost 0 diskspace containers/almost VMs
<whyrusleeping>
hmmm
<whyrusleeping>
Gaboose: yeah, i'm working on more complicated things than that
<whyrusleeping>
i'm trying to model NAT traversal
<whyrusleeping>
which means i need roughly three networks
<whyrusleeping>
one 'internet' type top level network, and two 'lan' networks, each with a gateway node running that's in both the lan network and the 'public' network
adamcurry has quit [Ping timeout: 240 seconds]
<gaboose>
neat
<whyrusleeping>
yeah... hopefully
<whyrusleeping>
the one thing i'm really good at in networking is using the 'ip' command
<whyrusleeping>
i'm hoping i can do everything i need with just that
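(A bare-bones sketch of that topology with just the `ip` command; names and subnets are invented, and the NAT rule itself needs iptables:)
    ip netns add lan1                                    # one 'lan' behind a gateway
    ip link add veth-lan1 type veth peer name veth-gw1   # veth pair into the namespace
    ip link set veth-lan1 netns lan1
    ip netns exec lan1 ip addr add 10.0.1.2/24 dev veth-lan1
    ip addr add 10.0.1.1/24 dev veth-gw1
    ip link set veth-gw1 up
    ip netns exec lan1 ip link set veth-lan1 up
    ip netns exec lan1 ip route add default via 10.0.1.1
    iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -j MASQUERADE   # masquerade like a home router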
<Kubuxu>
example network config:
    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.flags = up
    lxc.network.ipv4 = 10.2.2.1
    lxc.network.ipv4.gateway = 10.254.0.1
<noffle>
hm. I've been having what I presume to be nat problems suddenly, as of yesterday. I wish we had that ipfs-traceroute-like tool you talked about yesterday in go-ipfs sync, whyrusleeping. any other debug steps I might take?
<noffle>
are we using relays?
<ogd>
dignifiedquire: is it specific to .end(buff) or does it happen if you e.g. .write(buff); .end() separately?
<Kubuxu>
whyrusleeping: also raw network namespaces are also an option
<dignifiedquire>
ogd: that's not deterministic, because it's a race between the write and the read
<dignifiedquire>
ogd: but the finish event might be emitted before the write is actually persisted
<noffle>
according to 'ipfs id' I've punched through (/ip4/24.6.143.43/tcp/61934/ipfs/QmafMiHWHvVYPgZ84ZxctGsQy49PnpzJ6G7FvsVH7c8gJJ, which I can netcat into from the outside)
<ogd>
dignifiedquire: ah right i guess its pretty implementation dependent on what the behavior is
<gaboose>
<whyrusleeping> and you cant change the gateway of those nets
<ogd>
noffle: hahaha cool
<gaboose>
whyrusleeping: so you want one of your docker containers to imitate a router?
<gaboose>
:o
<dignifiedquire>
ogd: that was my first thought but the pending count is only used for the callback of create
<dignifiedquire>
writestream not for the stream itself as the flush in through is not capable of async
* dignifiedquire
on mobile please excuse typos
<whyrusleeping>
Gaboose: precisely!
<whyrusleeping>
can't even change the routes on them from the container
<whyrusleeping>
have to do it externally via netns exec
adamcurry has joined #ipfs
Stskeeps has joined #ipfs
ylp has joined #ipfs
<gaboose>
whyrusleeping: if you mean this error "RTNETLINK answers: Operation not permitted" on ip route change default via...
<whyrusleeping>
yeah, but that's just because namespaces drop privileges
<whyrusleeping>
so they cant touch their own networking
<whyrusleeping>
which is annoying
<gaboose>
docker run --cap-add=NET_ADMIN
<whyrusleeping>
>.>
<gaboose>
yeah :) or docker run --privileged
<gaboose>
this one solves all kinds of problems
adamcurry has quit [Ping timeout: 264 seconds]
<patagonicus>
ipfs add is too slow. I'm getting 2MB/s here, that's 7h to add a minimal Alpine mirror (50GB, edge + current stable) or 42h for a Gentoo mirror (300GB). :/
<patagonicus>
The machine isn't very fast, but both sha512sum and split'ing into 256k files get about ten times that speed.
<Kubuxu>
patagonicus: are you using 03 or 04?
<patagonicus>
That was with 0.4
<jbenet>
That's odd, you should get much better throughput. The bottleneck should be I/O or hashing, both should be bigger -- whyrusleeping: o/
<whyrusleeping>
patagonicus: in your ~/.ipfs/config
<noffle>
jbenet: I'm not clear on some of the mounting semantics. while /ipns is mounted, are changes supposed to be published live as I make them?
<whyrusleeping>
noffle: they get bubbled up and coalesced
<whyrusleeping>
the ipns entry gets republished either every (i think 5?) seconds, or when no operations are performed for 500ms
<whyrusleeping>
or something along those lines, i don't remember the exact numbers
<noffle>
whyrusleeping: hm. I can't seem to repro that. my changes never seem to go live.
<whyrusleeping>
noffle: what are you doing to test?
<noffle>
whyrusleeping: adding a file to my /ipns/my-key dir, then doing 'ipfs ls /ipns/my-key' locally
Not_ has joined #ipfs
<whyrusleeping>
huh
<noffle>
I also can't hit my ipns entry via public gateway, though regular ipfs stuff I publish works
<whyrusleeping>
>.>
<whyrusleeping>
weird
<patagonicus>
whyrusleeping: doesn't help. Maybe slightly faster, but for that I'd have to run this test a couple of times to see the change. (Also, it just shows +Inf% and the ETA is negative and counting down)
<whyrusleeping>
patagonicus: did you restart your daemon?
<patagonicus>
Yeah. I'm using docker and reformatted the ipfs repo fs, started the image, changed the config, restarted the daemon and then added the same data as before.
<noffle>
hm. I don't see my node publishing its pubkey to the dht when I 'ipfs name publish'
<noffle>
might explain why I can only resolve locally
m0ns00nfup has joined #ipfs
<ipfsbot>
[js-ipfs-api] Dignifiedquire created greenkeeper-mocha-2.4.1 (+1 new commit): https://git.io/vzDOA
<ipfsbot>
js-ipfs-api/greenkeeper-mocha-2.4.1 a879ad3 greenkeeperio-bot: chore(package): update mocha to version 2.4.1...
M-edrex has quit [Quit: node-irc says goodbye]
<ipfsbot>
[js-ipfs-api] Dignifiedquire deleted greenkeeper-mocha-2.4.1 at a879ad3: https://git.io/vzD36
hashcore has joined #ipfs
jfis has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
rendar has quit [Ping timeout: 265 seconds]
Encrypt has quit [Quit: Quitte]
adamcurry has joined #ipfs
jfis has joined #ipfs
rendar has joined #ipfs
hashcore has quit [Ping timeout: 240 seconds]
adamcurry has quit [Ping timeout: 272 seconds]
<Monokles>
hey what is ipld meant to abbreviate? ip linked data?
<jbenet>
whyrusleeping: don't tell people to set nosync to true without warning them of what it means for safety
<jbenet>
Yes
<Monokles>
ahh .. it's not mentioned anywhere in ipld.md :)
<jbenet>
Haha whoops comment on that PR pls
voxelot has quit [Ping timeout: 240 seconds]
<jbenet>
patagonicus: on a crash during an add with nosync, you may lose in-flight added data. An object mid-write may be left incomplete on disk.
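(The setting being discussed is presumably the datastore NoSync flag; if so, it can be toggled and reverted like this, with the data-loss caveat jbenet describes:)
    ipfs config --bool Datastore.NoSync true    # faster adds, unsafe on crash; restart the daemon
    ipfs config --bool Datastore.NoSync false   # back to safe (slower) writes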
adamcurry has joined #ipfs
<jbenet>
We should have ipfs repo fsck check all hashes. --repair attempts to redownload them
ike_ has joined #ipfs
<noffle>
jbenet: +1
intercommonage has quit [Remote host closed the connection]
boxxa has quit [Quit: Connection closed for inactivity]
charley has joined #ipfs
livvy has joined #ipfs
charley_ has joined #ipfs
<livvy>
Hey all, what are the best practices for using ipfs on a laptop? Is it designed to be run as a daemon at boot / a service / just a process? Sorry if this sounds obvious, maybe I'm missing something =)
fiatjaf_ has quit [Ping timeout: 256 seconds]
computerfreak has joined #ipfs
charley has quit [Ping timeout: 260 seconds]
Encrypt has joined #ipfs
<livvy>
Nm
Peer3Peer has joined #ipfs
Peer3Peer has quit [Client Quit]
voxelot has joined #ipfs
fiatjaf_ has joined #ipfs
bjp3 has joined #ipfs
livvy has quit [Ping timeout: 276 seconds]
fiatjaf_ has quit [Remote host closed the connection]
livvy has joined #ipfs
simonv3 has quit [Quit: Connection closed for inactivity]
livvy has quit [Read error: Connection reset by peer]
pfraze has quit [Remote host closed the connection]
charley has quit []
gordonb has quit [Quit: gordonb]
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to feat/mfs-flush-cmd: https://git.io/vzDD1
<ipfsbot>
go-ipfs/feat/mfs-flush-cmd d4df862 Jeromy: more proper file sync logic...
<dignifiedquire>
daviddias:
<dignifiedquire>
daviddias: I just implemented the full store, but now I realized node streams, after 2 years of discussions, are still not able to delay the finish event for cleanup
edubai__ has quit [Quit: Connection closed for inactivity]
<dignifiedquire>
a whole day wasted..
<dignifiedquire>
daviddias: don't use node streams for js-ipfs, use anything, but not this madness.
Trogones has quit [Ping timeout: 272 seconds]
b0at has joined #ipfs
simonv3 has joined #ipfs
pfraze has joined #ipfs
<whyrusleeping>
dignifiedquire: at least you learned something
<whyrusleeping>
all i've learned today is 'wow, what the fuck are the docker devs doing?'
<whyrusleeping>
as far as i can tell, docker is an overly complicated wrapper around lxc (now libcontainer) and network namespaces
<whyrusleeping>
and i can do all of docker's network namespacing in like, 30 lines of bash
<noffle>
dignifiedquire: what's the context?
<dignifiedquire>
whyrusleeping: didn't I tell you that before :P
<whyrusleeping>
dignifiedquire: you probably did, but i wanted to believe that the docker devs weren't incompetently making shit way more confusing than it needs to be
yellowsir1 has quit [Quit: Leaving.]
<dignifiedquire>
noffle: trying to implement resource handling with node streams for database writes and utterly failing, because the underlying design prevents me from doing it
<dignifiedquire>
(also there are subtle bugs due to that which resulted in daviddias thinking indexeddb was inconsistent)
adamcurry has joined #ipfs
devbug has quit [Ping timeout: 272 seconds]
notdaniel has quit [Quit: Bye!]
joshbuddy has quit [Quit: joshbuddy]
adamcurry has quit [Ping timeout: 265 seconds]
jhulten has joined #ipfs
<voxelot>
yeah we really want to use indexeddb to avoid that 5mb domain limit in local storage right (given they say its a hack to combine domains for more storage)
montagsoup has joined #ipfs
<alu>
voxelot Im moving to DTLA soon
<alu>
:3
<alu>
lets snack sometime
<voxelot>
alu: nice!
<voxelot>
downtown is legit, working in the valley is not =/
<voxelot>
let's kick it soon... also if you ever need a job let me know lol
<voxelot>
i hate driving from the valley to dtla every day, takes about 1.5 hours so i'm convincing them to let me work from home
<Sleep_Walker>
sorry if it sounds too crazy but have you considered that license should be a mandatory part of metadata?
joshbuddy has joined #ipfs
elima_ has quit [Ping timeout: 260 seconds]
devbug has joined #ipfs
m0ntagsoup has joined #ipfs
ashark has quit [Ping timeout: 240 seconds]
montagsoup has quit [Ping timeout: 240 seconds]
joshbuddy has quit [Quit: joshbuddy]
Not_ has quit [Ping timeout: 264 seconds]
IlanGodik has quit [Quit: Connection closed for inactivity]
<ipfsbot>
[go-ipfs] whyrusleeping created fix/0.3.11-changelog (+1 new commit): https://git.io/vzytw
<ipfsbot>
go-ipfs/fix/0.3.11-changelog 2b0680a Jeromy: finally add changelog for v0.3.11...
parkan has quit [Ping timeout: 240 seconds]
jedahan has joined #ipfs
jedahan has quit [Client Quit]
ianopolous has joined #ipfs
tlevine has quit [Ping timeout: 245 seconds]
<ipfsbot>
[go-ipfs] whyrusleeping opened pull request #2248: finally add changelog for v0.3.11 (master...fix/0.3.11-changelog) https://git.io/vzym8
montagsoup has joined #ipfs
adamcurry has joined #ipfs
joshbuddy has joined #ipfs
joshbuddy has quit [Remote host closed the connection]
joshbuddy has joined #ipfs
adamcurry has quit [Ping timeout: 276 seconds]
computerfreak has quit [Quit: Leaving.]
Oatmeal has quit [Ping timeout: 264 seconds]
Not_ has joined #ipfs
jfis has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
Matoro has joined #ipfs
r04r is now known as zz_r04r
<daviddias>
dignifiedquire: Node.js streams are just an elegant interface for IO (even if it took several iterations to get to it). If we don't use streams, we will have to have some other way of buffered reading and writing that offers back pressure and pausing
<daviddias>
Bugs are common and we can fix those; the obstacles with moving lots of data will always be present. Better to use widely adopted solutions and understand how we can adapt them to our use case, unless we have the capacity and knowledge to do the same things in a better way
<dignifiedquire>
the issue is that the underlying design is not allowing us to do safe writes
<dignifiedquire>
because we have no gurantees about flushes to the store being finished
<daviddias>
We can shim and only callback when a read returns truthy
<dignifiedquire>
so for me that makes them a broken tool to do the output part of io
<daviddias>
Also, we are not the first to move lots of data in the browser, we can ping Feross and learn from his experience for example, maybe he even has some notes somewhere
<lgierth>
(there is no such thing as a safe write)
<lgierth>
:)
<dignifiedquire>
we can do our own things on top, but as soon as we integrate with other modules or core the same unsafeness applies
<lgierth>
fix the flush guarantee in nodejs streams, and the next write cache is in the os
neurosis12 has quit [Remote host closed the connection]
<dignifiedquire>
lgierth: there are certain guarantees I want when doing transactional writes..
<dignifiedquire>
When writing to the db things are better
<lgierth>
i'm probably not the right person to weigh in on this anyhow, given my enormous experience with nodejs
<dignifiedquire>
that's fine I'm just frustrated :/
<lgierth>
yeah i feel ye
<dignifiedquire>
daviddias: rx streams might be an alternative or pull streams, not sure yet, will also investigate the stream spec for html, sth has to be there that works..
<dignifiedquire>
and yes would love to hear something about this from feross and mafintosh
<whyrusleeping>
whats a 'truthy' ?
<whyrusleeping>
sounds like a euphemism for a lie
<ianopolous>
lol
laterad has joined #ipfs
shamal has joined #ipfs
<lgierth>
and that's how truthy/falsey works in js
<dignifiedquire>
there is only truthy and lie :D
<whyrusleeping>
we should change it to 'fact' and 'lie'
<whyrusleeping>
make javascript great again
<dignifiedquire>
someone has been brushing up their trump language skilzz
<lgierth>
PSA: ipfs.io now resolves both v03x *and* v04x -- the fastest successful (status < 400) response wins
<lgierth>
voxelot: ^
<voxelot>
woot!
<whyrusleeping>
:D
<ianopolous>
on a related topic I've been wondering about transactions in IPFS relative to gc. Is there a way to guarantee nothing is gc'd during this process: write a bunch of objects, ending with one which forms a new merkle root which is then pinned?
<whyrusleeping>
i'm going to tag a release candidate for 0.4.0
<whyrusleeping>
ianopolous: not currently...
<whyrusleeping>
:(
<lgierth>
whyrusleeping: rc? don't we have open bugs?
<whyrusleeping>
just don't run a gc
<whyrusleeping>
we should probably expose something like 'ipfs repo gc-lock'
<lgierth>
rc is "good to go but we wanna make double sure"
<ianopolous>
can't a gc happen at any time?
<whyrusleeping>
lgierth: release candidate
<whyrusleeping>
yeah
Oatmeal has joined #ipfs
<whyrusleeping>
ianopolous: no, currently they only happen when you tell them to