aschmahmann changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.7.0 and js-ipfs 0.50.2 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of Conduct: https://git.io/vVBS0
rjknight_ has quit [Read error: Connection reset by peer]
rjknight_ has joined #ipfs
<continuouswave[m]>
<dxiri "is there a way to make my files "> Encrypt them and send a password out of band?
<swedneck>
use a private network
<swedneck>
it's an experimental feature
pecastro has quit [Ping timeout: 260 seconds]
<continuouswave[m]>
You can do a private swarm too, if that is an option
MDude has quit [Quit: Going offline, see ya! (www.adiirc.com)]
<swedneck>
<continuouswave[m "You can do a private swarm too, "> that's the same as private network
gimzmoe has left #ipfs ["WeeChat 2.8"]
<continuouswave[m]>
Oof, that message took a long time to send
<swedneck>
<continuouswave[m "Oof, that message tok a long tim"> federation, woo!
Anth0mk has quit [Ping timeout: 258 seconds]
jess has quit [Quit: Leaving]
ib07 has quit [Ping timeout: 240 seconds]
ib07 has joined #ipfs
<stavros>
Anyone know how to keep IPFS from choking itself?
ib07 has quit [Max SendQ exceeded]
ib07 has joined #ipfs
<jadedctrl>
stavros: restaints might help
<jadedctrl>
*restraints
MDude has joined #ipfs
rennets has quit [Ping timeout: 240 seconds]
<stavros>
jadedctrl, what sort of restraints?
<JCaesar>
The [Service] MemoryMax=... kind?
<JCaesar>
Tell us how it's choking itself first, though.
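For reference, a minimal systemd override along those lines; the unit name "ipfs" is an assumption, adjust the limits to taste:

    # /etc/systemd/system/ipfs.service.d/override.conf  (create via: systemctl edit ipfs)
    [Service]
    MemoryMax=1G
    # optional, cgroup v2 only: also cap swap usage
    MemorySwapMax=2G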
<JCaesar>
(I noticed that badgerds on a 1GB system is not a good idea, but I was able to get around it with a 4GB swap file...)
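The swap-file workaround, roughly; the path and size are whatever fits your system:

    sudo fallocate -l 4G /swapfile    # or: dd if=/dev/zero of=/swapfile bs=1M count=4096
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # add to /etc/fstab to make it permanent:
    # /swapfile none swap sw 0 0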
<dxiri>
swedneck private swarm seems like what I want, will read up on it! thanks!
catonano has quit [Ping timeout: 260 seconds]
zeden has quit [Quit: WeeChat 2.9]
grumpy_accountan has left #ipfs ["User left"]
jcea has quit [Ping timeout: 268 seconds]
caente has joined #ipfs
<JCaesar>
stavros: How is it choking?
zeden has joined #ipfs
Jeanne-Kamikaze has joined #ipfs
ipfs-stackbot1 has quit [Remote host closed the connection]
ipfs-stackbot1 has joined #ipfs
zootella has quit [Quit: Connection closed for inactivity]
Ecran10 has quit [Quit: Going offline, see ya! (www.adiirc.com)]
<stavros>
I have a node that is configured with 10 GB of storage, GC every hour, and a high-water mark of 90%, yet the 100 GB disk routinely fills up to the point where the IPFS daemon stops and can't restart because it can't write to its database. How can I limit data storage?
<stavros>
JCaesar
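For context, the knobs stavros is describing live under Datastore in the go-ipfs config, and automatic GC only runs if the daemon is started with it enabled; a sketch of that setup:

    ipfs config Datastore.StorageMax "10GB"
    ipfs config --json Datastore.StorageGCWatermark 90
    ipfs config Datastore.GCPeriod "1h"
    # automatic GC is off by default; the daemon must be started with:
    ipfs daemon --enable-gc
    # note: StorageMax is a GC trigger, not a hard cap -- the repo can still
    # grow past it between GC runs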
cnemo[m] has left #ipfs ["User left"]
swills has quit [Ping timeout: 240 seconds]
swills has joined #ipfs
kinky_ has joined #ipfs
myk_ has joined #ipfs
myk_ has quit [Client Quit]
stavros has quit [Remote host closed the connection]
myk_ has joined #ipfs
myk_ has quit [Read error: Connection reset by peer]
conifer has left #ipfs [#ipfs]
chachasmooth has quit [Ping timeout: 268 seconds]
chachasmooth has joined #ipfs
<JCaesar>
Hm. I assume you're pinning less than 90GB locally (note that publishing via IPNS or having content in MFS (ipfs files) also constitutes pinning)?
<JCaesar>
(But yeah, I also have a node like that. It only has 40GB pinned, yet the store uses 80GB.)
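A few stock go-ipfs commands that help see where the space is going in a case like that:

    ipfs repo stat --human          # RepoSize vs. StorageMax
    ipfs pin ls --type=recursive    # explicit pins
    ipfs files stat /               # MFS root (also kept alive, as noted above)
    ipfs repo gc                    # manual GC pass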
jrt has quit [Killed (orwell.freenode.net (Nickname regained by services))]
jrt has joined #ipfs
chachasmooth has quit [Ping timeout: 246 seconds]
chachasmooth has joined #ipfs
zeden has quit [Quit: WeeChat 2.9]
LHLaurini has quit [Ping timeout: 240 seconds]
Mx8v has joined #ipfs
Mx8v has quit [Client Quit]
caente has quit [Ping timeout: 252 seconds]
ib07_ has joined #ipfs
ib07 has quit [Ping timeout: 272 seconds]
MDude has quit [Quit: Going offline, see ya! (www.adiirc.com)]
<JCaesar>
Hm. I've also got an IPFS node that's choking itself, but with CPU usage. It has been spinning for more than a day now. Can I somehow get a list of running goroutines to find out what's wrong?
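The go-ipfs daemon exposes Go's pprof handlers on its API port, so one way to get that list (assuming the default API address of 127.0.0.1:5001):

    # full stack dump of every goroutine
    curl 'http://127.0.0.1:5001/debug/pprof/goroutine?debug=2' > goroutines.txt
    # 30-second CPU profile, useful for finding what's spinning
    curl 'http://127.0.0.1:5001/debug/pprof/profile?seconds=30' > cpu.pprof
    go tool pprof cpu.pprof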