<brianhoffman>
{"id":131619,"level":4,"message":"got error on dial to /ip4/10.255.0.177/tcp/9005/ws: \u003cpeer.ID bPu18Z\u003e --\u003e \u003cpeer.ID dSjypo\u003e dial attempt failed: websocket.Dial ws://10.255.0.177:9005: dial tcp 10.255.0.177:9005: socket: too many open files","module":"swarm2","time":"2017-08-28T09:06:26.558719851-04:00"}
jungly has joined #ipfs
<Kubuxu>
brianhoffman: what version of go-ipfs are you running?
<brianhoffman>
since there’s no way to adjust the file descriptor limit on iOS
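(Editor's note: the "too many open files" error above means the process hit its `RLIMIT_NOFILE` file-descriptor limit. A minimal Go sketch of how a daemon can raise its soft limit up to the hard limit on platforms that allow it — iOS does not, which is brianhoffman's problem; the function name is illustrative, not go-ipfs's actual code:)

```go
package main

import (
	"fmt"
	"syscall"
)

// raiseNofileLimit bumps the soft RLIMIT_NOFILE up to the hard limit.
// This is roughly what long-running server processes do to avoid
// "too many open files"; on iOS the limits cannot be changed.
func raiseNofileLimit() (uint64, error) {
	var lim syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		return 0, err
	}
	lim.Cur = lim.Max // raise the soft limit to the hard limit
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		return 0, err
	}
	return lim.Cur, nil
}

func main() {
	cur, err := raiseNofileLimit()
	if err != nil {
		fmt.Println("could not raise limit:", err)
		return
	}
	fmt.Println("soft RLIMIT_NOFILE is now", cur)
}
```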
<brianhoffman>
ummm hold on
<Kubuxu>
there was a slight fix for the dialer recently
jaboja has quit [Ping timeout: 240 seconds]
<brianhoffman>
0.4.10
<brianhoffman>
which version added the dialer fix @kubuxu?
<brianhoffman>
0.4.11 is adding that?
<Kubuxu>
it is still in master, was not released yet
<Kubuxu>
I think it was merged ~2 weeks ago
<DuClare>
Does this affect js-ipfs?
<DuClare>
Just wondering since orbit.chat had some issues running out of fds
<Kubuxu>
js-ipfs is separate
<DuClare>
Ok
M-gdr has joined #ipfs
NullConstant has quit [Ping timeout: 246 seconds]
<TsT>
is there a command to check if a hash still exists ?
rcat has quit [Ping timeout: 240 seconds]
<voker57>
TsT: ipfs dht findprovs, if I understand correctly what you mean by "exists"
<TsT>
if a file is still available, without a full download of it ;)
<voker57>
you can't check if it's _fully_ available without downloading most of it
<TsT>
hmm
rcat has joined #ipfs
<voker57>
well if it's using raw leaves you can probably skip the data parts of the dag
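(Editor's note: `ipfs dht findprovs` can also be driven over the daemon's HTTP API. A small sketch that just builds the request URL — the default local API address is an assumption, actually issuing the request needs a running daemon, and, as voker57 says, finding providers for the root hash does not prove every block is still held:)

```go
package main

import (
	"fmt"
	"net/url"
)

// findprovsURL builds the go-ipfs HTTP API request that mirrors
// `ipfs dht findprovs <cid>`. apiAddr is the daemon's API address
// (http://127.0.0.1:5001 by default).
func findprovsURL(apiAddr, cid string) string {
	return fmt.Sprintf("%s/api/v0/dht/findprovs?arg=%s", apiAddr, url.QueryEscape(cid))
}

func main() {
	fmt.Println(findprovsURL("http://127.0.0.1:5001",
		"QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG"))
}
```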
<TsT>
what is the rule to understand how long a file is kept ?
<voker57>
but I don't think there's such command
<TsT>
kept/exists somewhere on ipfs ;)
<voker57>
file is kept as long as all its parts are kept by some node
<voker57>
and that depends on whether they pinned it, how often they run gc, etc.
<TsT>
then to be sure that a file still exists ... I should run my own node to serve it ?
m0ns00n_ has joined #ipfs
<voker57>
yes. There's no implicit sharing of content in IPFS, your file is shared if somebody downloaded it and keeps seeding
<TsT>
voker57 ok ;)
<TsT>
voker57 my challenge is: on a web server, when a file is downloaded, publish it to ipfs and redirect the http requester to ipfs.io/ipfs/<thehash>
<voker57>
challenge?
<TsT>
technical challenge :)
<TsT>
to help a friend
Foxcool has quit [Remote host closed the connection]
neuthral has quit [Quit: neuthral]
erictapen has quit [Ping timeout: 255 seconds]
m0ns00n_ has quit [Quit: quit]
erictapen has joined #ipfs
jmill has joined #ipfs
erictapen has quit [Ping timeout: 260 seconds]
erictapen has joined #ipfs
detran has joined #ipfs
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
bwerthmann has joined #ipfs
erictapen has quit [Remote host closed the connection]
erictapen has joined #ipfs
f33d has joined #ipfs
ashark has joined #ipfs
jokoon has joined #ipfs
igorline has joined #ipfs
shizy has joined #ipfs
<brianhoffman>
so kubuxu i enabled mdns discovery and limited the ipns queries to 1 and ipfs is usable on go
<brianhoffman>
*ios
<brianhoffman>
not go
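(Editor's note: the mDNS discovery toggle brianhoffman mentions lives in the go-ipfs config file, `~/.ipfs/config`; the relevant fragment looks like this — values shown are the usual defaults, and the rest of the file is omitted:)

```json
{
  "Discovery": {
    "MDNS": {
      "Enabled": true,
      "Interval": 10
    }
  }
}
```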
MDude has quit [Ping timeout: 240 seconds]
maxlath has joined #ipfs
PureTryOut[m] has joined #ipfs
<PureTryOut[m]>
hey guys, I was wondering. the main page at https://ipfs.io mentions "Each network node stores only content it is interested in". this sounds nice, but doesn't that mean content which only a few people are interested in might still become inaccessible in a few years' time?
<PureTryOut[m]>
the people who are interested in that data might drop off the network over time, or just lose interest in the data and get rid of it
<PureTryOut[m]>
with only a few people hosting the data, this would make it inaccessible over time, would it not?
<voker57>
PureTryOut[m]: correct
<voker57>
but once they rehost it, it will be available at the same link
<voker57>
(if they use exact same hash and dag builder, heh)
<PureTryOut[m]>
sure, but you need to explicitly download the data. if only a few people do that who eventually drop off or stop rehosting, it'll be gone. isn't it better to split a file up into lots of chunks and let random people on the network host bits of that file?
<voker57>
it's better for availability but has its own problems
<voker57>
some people don't want to seed illegal files for instance
<r0kk3rz>
PureTryOut[m]: you can set up consensus collectives like that if you want, but it's not baked in
<PureTryOut[m]>
voker57: does that really matter if you only seed a tiny bit of it and make sure it's encrypted (so the seeder doesn't know what it's seeding)?
<voker57>
PureTryOut[m]: that doesn't save people from being jailed
<voker57>
people get problems for using tor exit nodes
<voker57>
for keeping *
<PureTryOut[m]>
well with Tor exit nodes whole illegal websites get loaded through it. that amount of illegal data wouldn't really happen if you just hosted bits of data
<Kubuxu>
brianhoffman: good to hear
<voker57>
are you a lawyer? are you a government? nothing's impossible if you have the power of prosecution
<voker57>
jail just one person for a bit of data and others are scared of running the software
NullConstant has joined #ipfs
<PureTryOut[m]>
I'm just afraid of scenarios like I had a while ago, where I tried to find some files of a project from like 10 years back. it had never been really popular outside of a specific community (no, nothing illegal, just getting Linux to run on the Nintendo DS). I'm afraid that even with IPFS, such a project would still become inaccessible eventually
<r0kk3rz>
even without the threat of prosecution, do you want to host some unsavoury stuff, even without knowing?
<voker57>
it's hard to invent an algorithm that ensures your centuries-old data is forever stored in full
<r0kk3rz>
PureTryOut[m]: the benefit here is that people like the internet archive can host stuff *at the same location* rather than breaking links
<PureTryOut[m]>
of course I'd rather not, but if that meant content which I do want stays accessible forever in a secure and private way, I think I'd go for it
<voker57>
then filecoin might help you
<voker57>
if/when it gets done
<voker57>
you host illegal stuff -> receive filecoins -> pay for storage of data you like
<r0kk3rz>
but ultimately it must be stored somewhere, and so someone needs to care about it. storing everything forever sounds like a fantastic way to create bloat
<r0kk3rz>
like that guy who uploaded a petabyte of porn to AWS
<PureTryOut[m]>
and consider an application like YouTube on IPFS. I watch several videos per day, resulting in several tens a week. I really do not want to rehost every file of such an application, as I'd need an insane amount of storage
<voker57>
ipfs has gc for that
<PureTryOut[m]>
gc?
<voker57>
garbage collector
<r0kk3rz>
PureTryOut[m]: IPFS has a buffer, it garbage collects things when you reach the buffer limit
<PureTryOut[m]>
so basically it removes the file from your pc again if you reach a limit?
<r0kk3rz>
so maybe you only want to allocate 10gb of hdd space to IPFS, it will only use that much
<voker57>
and if you like the content, pin it
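(Editor's note: the "buffer" r0kk3rz describes is configured in go-ipfs under `Datastore` in `~/.ipfs/config`; a fragment with the relevant keys, shown with their usual defaults. Note that automatic periodic GC only runs when the daemon is started with `--enable-gc`; pinned content is never collected:)

```json
{
  "Datastore": {
    "StorageMax": "10GB",
    "StorageGCWatermark": 90,
    "GCPeriod": "1h"
  }
}
```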
<PureTryOut[m]>
hmm I guess that's solved then, except for some really obscure videos again (data with low interest)
<miflow[m]>
the nice thing is you can use the api to automate what to pin, so why not write a priority algorithm with emphasis on availability
<r0kk3rz>
PureTryOut[m]: some torrent communities keep really obscure content alive for a very long time. it's a shame they keep getting shut down
<miflow[m]>
like, auto pinning obscure videos with a priority
* r0kk3rz
is still sad about what.cd
<PureTryOut[m]>
miflow: yeah I guess
Milijus has joined #ipfs
<miflow[m]>
or use filecoin to keep them alive
ulrichard has quit [Remote host closed the connection]
<voker57>
no program can distinguish obscure videos in need of archiving from 100TB of footage of drying paint
<voker57>
so it's always up to humans to preserve stuff, ipfs makes it easier
neuthral has joined #ipfs
maxlath has quit [Ping timeout: 240 seconds]
f33d has quit [Read error: Connection reset by peer]
f33d has joined #ipfs
Milijus_ has joined #ipfs
Milijus has quit [Ping timeout: 246 seconds]
ylp has quit [Quit: Leaving.]
ilyaigpetrov has joined #ipfs
Foxcool has joined #ipfs
pat36 has quit [Read error: Connection reset by peer]
pat36 has joined #ipfs
igorline has quit [Ping timeout: 240 seconds]
m0ns00n has quit [Quit: quit]
pat36 has quit [Read error: Connection reset by peer]
Milijus_ has quit [Quit: Leaving]
Milijus has joined #ipfs
<Mateon1>
I wonder, will the DHT ever be exposed to applications, allowing them to put arbitrary data into it at a given hash?
<Mateon1>
I can see so many applications for that
pat36 has joined #ipfs
NullConstant has quit [Ping timeout: 240 seconds]
<Mateon1>
Right now the best that can be done is pubsub + a network of always on nodes