lgierth changed the topic of #ipfs to: go-ipfs v0.4.8 is out! https://dist.ipfs.io/#go-ipfs | Week 13: Web browsers, IPFS Cluster, Orbit -- https://waffle.io/ipfs/roadmaps | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | FAQ: https://git.io/voEh8 | Logs: https://botbot.me/freenode/ipfs/ | Code of Conduct: https://git.io/vVBS0
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
onabreak has quit [Quit: Page closed]
onabreak has joined #ipfs
imvr_ has quit [Quit: Leaving]
rcat has quit [Quit: leaving]
ygrek_ has joined #ipfs
neurrowcat has joined #ipfs
talonz has quit [Remote host closed the connection]
asyncsec has quit [Quit: asyncsec]
archpc has quit [Quit: Alt-F4 at console]
asyncsec has joined #ipfs
dimitarvp has quit [Read error: Connection reset by peer]
dimitarvp has joined #ipfs
dimitarvp has quit [Quit: Bye]
yellowbelly[m] has joined #ipfs
ofdm has quit [Ping timeout: 240 seconds]
ofdm has joined #ipfs
jmill has joined #ipfs
archpc has joined #ipfs
john3 has quit [Ping timeout: 255 seconds]
Aranjedeath has quit [Quit: Three sheets to the wind]
vgeds has joined #ipfs
talonz has joined #ipfs
vgeds has quit [Client Quit]
skeuomorf has joined #ipfs
reit has joined #ipfs
solariiknight[m] has joined #ipfs
JayCarpenter has quit [Quit: Page closed]
bedeho has joined #ipfs
gmcabrita has quit [Quit: Connection closed for inactivity]
AkhILman has joined #ipfs
jmill has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
robattila256 has joined #ipfs
jmill has joined #ipfs
x43[m] has joined #ipfs
chris613 has quit [Quit: Leaving.]
infinity0 has joined #ipfs
R-Yo[m] has joined #ipfs
_whitelogger has joined #ipfs
sein has joined #ipfs
sein is now known as Guest42129
Guest42129 has quit [Client Quit]
robattila256 has quit [Quit: WeeChat 1.7.1]
archpc has quit [Ping timeout: 268 seconds]
archpc has joined #ipfs
asyncsec has quit [Quit: asyncsec]
bedeho has quit []
palkeo has quit [Ping timeout: 260 seconds]
chungy has quit [Quit: ZNC - http://znc.in]
athan has quit [Remote host closed the connection]
athan has joined #ipfs
archpc has quit [Read error: Connection reset by peer]
archpc has joined #ipfs
ZaZ has joined #ipfs
robattila256 has joined #ipfs
<Bat`O> whyrusleeping: I see that pubsub uses the self key. Does it make sense to be able to choose the key used to publish something? I guess that until encryption is there, from the recipient's side there is no guarantee that a specific key was used?
bedeho has joined #ipfs
daviddias has joined #ipfs
Mitar has quit [Ping timeout: 240 seconds]
ralphtheninja has joined #ipfs
Caterpillar has joined #ipfs
matoro has quit [Quit: WeeChat 1.7.1]
Caterpillar has quit [Client Quit]
matoro has joined #ipfs
espadrine has joined #ipfs
Caterpillar has joined #ipfs
Mitar has joined #ipfs
Foxcool has joined #ipfs
<dsal> Bat`O: The --key option lets you specify the key.
<dsal> Not sure what you mean by the second part. It's rather obvious a specific key was used.
Foxcool has quit [Quit: http://foxcool.ru]
corvinux has joined #ipfs
corvinux has quit [Ping timeout: 240 seconds]
<Bat`O> dsal: as far as I can tell, there is no --key option for ipfs pubsub pub
<dsal> Oh, pubsub. I thought you were talking about ipns
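(A minimal sketch of the distinction above, assuming a go-ipfs 0.4.x CLI; the key name "mykey", the topic "mytopic", and "somefile" are made up, and pubsub requires the daemon to be started with --enable-pubsub-experiment:)
    # IPNS: you can generate a named key and publish a record with it
    ipfs key gen --type=rsa --size=2048 mykey
    HASH=$(ipfs add -q ./somefile | tail -n1)     # "somefile" is a placeholder
    ipfs name publish --key=mykey /ipfs/$HASH

    # pubsub: no --key flag is exposed, messages go out under the node's self key
    ipfs pubsub pub mytopic "hello"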
ZaZ has quit [Read error: Connection reset by peer]
rendar has joined #ipfs
ecloud has quit [Ping timeout: 240 seconds]
ecloud has joined #ipfs
mildred1 has joined #ipfs
mildred4 has joined #ipfs
espadrine has quit [Ping timeout: 258 seconds]
maxlath has joined #ipfs
john3 has joined #ipfs
ianopolous has joined #ipfs
espadrine` has joined #ipfs
jrahmy_ has joined #ipfs
Soulweaver has joined #ipfs
jungly has joined #ipfs
maxlath has quit [Ping timeout: 268 seconds]
cxl000 has joined #ipfs
fjl__ has quit [Quit: bye]
fjl has joined #ipfs
reit has quit [Ping timeout: 268 seconds]
maxlath has joined #ipfs
jrahmy_ has quit [Ping timeout: 260 seconds]
rcat has joined #ipfs
elkalamar has quit [Ping timeout: 260 seconds]
xelra has quit [Ping timeout: 240 seconds]
xelra has joined #ipfs
maxlath has quit [Remote host closed the connection]
maxlath has joined #ipfs
robattila256 has quit [Quit: WeeChat 1.7.1]
robattila256 has joined #ipfs
gmcabrita has joined #ipfs
kvda has joined #ipfs
skeuomorf has quit [Ping timeout: 260 seconds]
angreifer has joined #ipfs
gastrolith has joined #ipfs
reit has joined #ipfs
mildred4 has quit [Read error: No route to host]
tinsu has joined #ipfs
mildred4 has joined #ipfs
tiago has quit [Changing host]
tiago has joined #ipfs
jkilpatr has joined #ipfs
<tinsu> is there a plan to establish a tool where i can report a certain hash / file? for example a private embarrassing picture somebody else added to the ipfs network. like on facebook, where i can report abuse.
tilgovi has joined #ipfs
<r0kk3rz> tinsu: what would the expected outcome be? once information is out there it's usually impossible to really remove it
<tinsu> exactly! but that's the advantage of ipfs. if i could block a hash it would be impossible to distribute such a file through ipfs
<tinsu> all right, if somebody already got it and put it on their hard disk somewhere...
<tinsu> then it's useless
<tinsu> but the distribution would be much harder
robattila256 has quit [Quit: WeeChat 1.7.1]
<tinsu> if one day 80% of the web is running on ipfs (fingers crossed) this would be huge for avoiding abuse and protecting people's privacy
<Atrus[m]> But showing the hash kinda advertises it. All I need to do to get a list of "bad files" is to download the blacklist.
hpk has joined #ipfs
<tinsu> and this blacklist is open to be downloaded?
<tinsu> so there is no intention to actively block certain files?
<tinsu> all files are free to be added
<tinsu> and can never be removed again
btmsn has joined #ipfs
btmsn has quit [Client Quit]
<yangwao> tinsu: who would be the authority for abuse reporting?
kaotisk has quit [Read error: Connection reset by peer]
<yangwao> tinsu: I think it's up to you to implement some kind of blacklist of hashes
<kythyria[m]> Also, a widespread blacklist has fun implications. Like being abusable itself
<kythyria[m]> If someone gets a hash of an object you want to publish into the blacklist, you can't publish it either.
btmsn has joined #ipfs
<tinsu> yes, the CA question... that's a hard nut. who can be trusted.
hpk has left #ipfs ["WeeChat 0.4.2"]
skeuomorf has joined #ipfs
<tinsu> it can't be a state, it needs to be some sort of algorithm... an unstoppable program
<tinsu> i'm fantasizing :)
btmsn has quit [Client Quit]
tilgovi has quit [Ping timeout: 268 seconds]
<tinsu> but i guess we are a few steps away from such algorithms
btmsn has joined #ipfs
<yangwao> And who can be trusted?
<yangwao> and why don't you embrace local authority?
<tinsu> local authority?
<Stskeeps> fwiw the bad files blacklist is meant to be double hashed afaik
<Stskeeps> i.e. you hash the requested hash and if it's on the blacklist, well, yeah
<Stskeeps> so you can't derive the original hash from the blacklist
<Stskeeps> that easily
MrSparkle has joined #ipfs
<Atrus[m]> Oh that's right! I completely forgot about that
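(A rough bash illustration of the double-hashing idea Stskeeps describes; the denylist.txt file and the use of sha256sum are purely illustrative, not how any real blacklist is specified:)
    # the list stores sha256(content-hash), never the content hash itself
    echo -n "QmBadContentHashGoesHere" | sha256sum | cut -d' ' -f1 >> denylist.txt

    # a gateway hashes each requested hash and checks for membership,
    # so downloading the list does not reveal the original hashes
    req="QmBadContentHashGoesHere"
    h=$(echo -n "$req" | sha256sum | cut -d' ' -f1)
    grep -qx "$h" denylist.txt && echo blocked || echo allowed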
robattila256 has joined #ipfs
dimitarvp has joined #ipfs
<tinsu> thanks for the explanation :)
<r0kk3rz> i'd rather it be handled something like spam blacklists are: you can curate one if you want and other applications can take notice of it
<r0kk3rz> but IPFS is open source, and i'm sure someone would remove the blacklist code if it were baked in somehow
tinsu has quit [Ping timeout: 260 seconds]
jleon has joined #ipfs
jleon has quit [Client Quit]
john3 has quit [Ping timeout: 268 seconds]
maxlath has quit [Ping timeout: 260 seconds]
asyncsec has joined #ipfs
maxlath has joined #ipfs
Soulweaver has quit [Remote host closed the connection]
john3 has joined #ipfs
skeuomorf has quit [Ping timeout: 255 seconds]
bedeho has quit [Remote host closed the connection]
bedeho has joined #ipfs
<daviddias> !pin QmNrqBDBo5x25GM9tEHF7p8TcYcFWpyTe1CpaeYCEiJJ64 libp2p-mapper-base
<pinbot> now pinning on 8 nodes
<pinbot> pinned on 8 of 8 nodes (0 failures) -- https://ipfs.io/ipfs/QmNrqBDBo5x25GM9tEHF7p8TcYcFWpyTe1CpaeYCEiJJ64
bedeho has quit [Ping timeout: 260 seconds]
bedeho has joined #ipfs
robattila256 has quit [Quit: WeeChat 1.7.1]
<daviddias> !pin Qmb19dkr5uMN8Hi13SBifeZwbAVYBYSE41nzDusUZg6MHY libp2p-mapper-base-v1
<pinbot> now pinning on 8 nodes
<pinbot> pinned on 8 of 8 nodes (0 failures) -- https://ipfs.io/ipfs/Qmb19dkr5uMN8Hi13SBifeZwbAVYBYSE41nzDusUZg6MHY
<musicmatze> daviddias: pinned.
<daviddias> !botsnack
<sprint-helper> om nom nom
<daviddias> sprint-helper: it was for pinbot
<sprint-helper> Error: Unrecognized command!
<sprint-helper> Correct usage: sprint-helper: announce <args> | next | now | tomorrow | help
bedeho has quit [Remote host closed the connection]
bedeho has joined #ipfs
tilgovi has joined #ipfs
bedeho has quit [Remote host closed the connection]
bedeho has joined #ipfs
bedeho has quit [Remote host closed the connection]
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
bedeho has joined #ipfs
bedeho has quit [Remote host closed the connection]
neurrowcat has quit [Quit: Deebidappidoodah!]
shizukesa has joined #ipfs
tilgovi has quit [Ping timeout: 240 seconds]
Soft has quit [Ping timeout: 240 seconds]
reit has quit [Ping timeout: 268 seconds]
ashark has joined #ipfs
citizenErased has joined #ipfs
elasticdog has joined #ipfs
maxlath has quit [Ping timeout: 246 seconds]
elasticdog has quit [Changing host]
elasticdog has joined #ipfs
maxlath has joined #ipfs
john3 has quit [Ping timeout: 246 seconds]
john3 has joined #ipfs
ikpunk[m] has left #ipfs ["User left"]
anewuser has joined #ipfs
droman has joined #ipfs
citizenErased has quit [Ping timeout: 240 seconds]
jaboja has joined #ipfs
anewuser has quit [Ping timeout: 240 seconds]
anewuser has joined #ipfs
palkeo has joined #ipfs
bedeho has joined #ipfs
ShalokShalom has joined #ipfs
Falconix_ has quit [Ping timeout: 240 seconds]
Falconix has joined #ipfs
<sprint-helper> The next event "IPFS All Hands Call" is in 15 minutes.
brendyyn has quit [Ping timeout: 240 seconds]
palkeo has quit [Ping timeout: 240 seconds]
brendyyn has joined #ipfs
<flyingzumwalt> sprint-helper: "IPFS All Hands Call" 452 https://hackmd.io/MYEwRgDA7AjGCGBaeYAsTUFYzEQDigFNDEQAzTMqATghlTAGYwg= no-stream\
<sprint-helper> Correct usage: sprint-helper: announce <args> | next | now | tomorrow | help
<sprint-helper> Error: Unrecognized command!
<flyingzumwalt> sprint-helper announce now "IPFS All Hands Call" 452 https://hackmd.io/MYEwRgDA7AjGCGBaeYAsTUFYzEQDigFNDEQAzTMqATghlTAGYwg= no-stream\
<sprint-helper> Error: Not all args are valid.
<sprint-helper> Correct usage: sprint-helper: announce <topic name> <sprint issue> <notes> <zoom> <stream url or message>
JayCarpenter has joined #ipfs
<flyingzumwalt> the instructions in the sprint-helper readme are incorrect.
<whyrusleeping> wheres the link?
<sprint-helper> Error: Unrecognized command!
<sprint-helper> Correct usage: sprint-helper: announce <args> | next | now | tomorrow | help
<flyingzumwalt> *%^)*(**$!
<whyrusleeping> good enough
<sprint-helper> Join Call: https://zoom.us/j/779-351-365
<sprint-helper> Watch Stream: no-stream
<sprint-helper> ================================================================================
<sprint-helper> Sprint Issue: https://github.com/ipfs/pm/issues/452
<sprint-helper> ========================= IPFS Sprint: IPFS All Hands Call =========================
<sprint-helper> Topic: IPFS All Hands Call
ygrek_ has quit [Ping timeout: 268 seconds]
ShalokShalom has quit [Remote host closed the connection]
rory has joined #ipfs
ShalokShalom has joined #ipfs
ShalokShalom has quit [Read error: Connection reset by peer]
rendar has quit [Ping timeout: 240 seconds]
bedeho has quit [Remote host closed the connection]
ShalokShalom has joined #ipfs
jaboja has quit [Ping timeout: 240 seconds]
jaboja has joined #ipfs
kevina has quit [Quit: Leaving]
anewuser has quit [Ping timeout: 246 seconds]
webdev007 has joined #ipfs
btmsn has quit [Remote host closed the connection]
bedeho has joined #ipfs
ckwaldon has quit [Ping timeout: 260 seconds]
ckwaldon has joined #ipfs
brothers has joined #ipfs
btmsn has joined #ipfs
anewuser has joined #ipfs
kevina has joined #ipfs
rendar has joined #ipfs
ckwaldon has quit [Quit: ckwaldon]
bedeho has quit [Remote host closed the connection]
asyncsec has quit [Remote host closed the connection]
vgeds has joined #ipfs
dgrisham has joined #ipfs
asyncsec has joined #ipfs
spossiba has quit [Quit: Lost terminal]
spossiba has joined #ipfs
vgeds has quit [Read error: Connection reset by peer]
Encrypt has joined #ipfs
jmill has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
brendyyn has quit [Ping timeout: 272 seconds]
marten_ has joined #ipfs
bedeho has joined #ipfs
robattila256 has joined #ipfs
aedigix has quit [Remote host closed the connection]
aedigix has joined #ipfs
aedigix has quit [Remote host closed the connection]
aedigix has joined #ipfs
marten_ has quit [Quit: Textual IRC Client: www.textualapp.com]
john3 has quit [Ping timeout: 246 seconds]
galois_d_ has joined #ipfs
galois_dmz has quit [Ping timeout: 272 seconds]
JayCarpenter has quit [Quit: Page closed]
jaboja has quit [Ping timeout: 246 seconds]
skeuomorf has joined #ipfs
rory has left #ipfs [#ipfs]
robattila256 has quit [Quit: WeeChat 1.7.1]
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
cellvia has joined #ipfs
cellvia has left #ipfs [#ipfs]
ShalokShalom_ has joined #ipfs
ShalokShalom_ has quit [Read error: Connection reset by peer]
ShalokShalom has quit [Ping timeout: 246 seconds]
ShalokShalom_ has joined #ipfs
ShalokShalom_ has quit [Read error: Connection reset by peer]
galois_d_ has quit [Remote host closed the connection]
ShalokShalom has joined #ipfs
bedeho has quit [Remote host closed the connection]
steefmin has quit [Ping timeout: 240 seconds]
galois_dmz has joined #ipfs
galois_dmz has quit [Client Quit]
steefmin has joined #ipfs
galois_dmz has joined #ipfs
jmill has joined #ipfs
jmill has quit [Client Quit]
galois_d_ has joined #ipfs
brendyyn has joined #ipfs
anewuser has quit [Ping timeout: 268 seconds]
galois_dmz has quit [Ping timeout: 255 seconds]
galois_d_ has quit [Client Quit]
bedeho has joined #ipfs
maxlath has quit [Ping timeout: 240 seconds]
bwerthmann has joined #ipfs
dimitarvp has quit [Ping timeout: 268 seconds]
koya[m] has joined #ipfs
Encrypt has quit [Quit: Quit]
galois_dmz has joined #ipfs
john3 has joined #ipfs
espadrine` has quit [Ping timeout: 240 seconds]
ialz has joined #ipfs
dimitarvp has joined #ipfs
R-Yo[m] has left #ipfs ["User left"]
<charlienyc[m]> Any news on Kurdish or Arabic?
dimitarvp` has joined #ipfs
dimitarvp has quit [Disconnected by services]
dimitarvp` is now known as dimitarvp
maxlath has joined #ipfs
koya[m] has left #ipfs ["User left"]
<flyingzumwalt> kurdish snapshot is up. arabic is stalled because there's something wrong with the kiwix ZIM file
<flyingzumwalt> I haven't had a chance to publish an update on the blog
<flyingzumwalt> We started listing the hashes in the GH repo at https://github.com/ipfs/distributed-wikipedia-mirror/blob/master/snapshot-hashes.yml
<flyingzumwalt> as you can see, en.wikipedia.org snapshot is also up. https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco
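(For anyone who wants to help host one of these, a minimal sketch using the snapshot hash quoted above; note that pinning the English snapshot pulls down a very large amount of data:)
    # list the top of the snapshot, then pin it locally to help serve it
    ipfs ls QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco
    ipfs pin add QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco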
shizukesa is now known as shizy
windsok has joined #ipfs
Encrypt has joined #ipfs
bedeho has quit [Remote host closed the connection]
espadrine has joined #ipfs
bedeho has joined #ipfs
tilgovi has joined #ipfs
jmill has joined #ipfs
vgeds has joined #ipfs
katamori has joined #ipfs
galois_dmz has quit [Remote host closed the connection]
enzo has joined #ipfs
galois_dmz has joined #ipfs
enzo is now known as Guest76332
bwerthmann has quit [Ping timeout: 240 seconds]
jaboja has joined #ipfs
vgeds has quit [Read error: Connection reset by peer]
vgeds has joined #ipfs
kegan has quit [Ping timeout: 240 seconds]
vgeds has quit [Client Quit]
Oatmeal has quit [Quit: Suzie says, "TTFNs!"]
<dgrisham> flyingzumwalt: do you know the size difference (due to deduplication/etc) between the 'raw' english wikipedia and the ipfs mirror?
<whyrusleeping> if you were to save wikipedia to disk, i think it was like 700GB
<whyrusleeping> and 240 or so in IPFS
<whyrusleeping> that said, wikipedia's backend stuff does that deduplication too for the most part
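(The deduplication is easy to observe on a small scale; a sketch, with file names made up and sizes depending on your repo:)
    ipfs repo stat                 # note RepoSize before
    ipfs add big-dump.xml          # RepoSize grows by roughly the file size
    cp big-dump.xml copy.xml
    ipfs add copy.xml              # returns the same hash; RepoSize barely moves
    ipfs repo stat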
mildred4 has quit [Read error: No route to host]
mildred1 has quit [Read error: No route to host]
mildred1 has joined #ipfs
mildred4 has joined #ipfs
bedeho has quit [Remote host closed the connection]
webdev007 has quit [Ping timeout: 268 seconds]
<katamori> beginner question: does anyone know what the error "merkledag: not found" means? I mean it's obvious what it means, but why would I get this error if the file or dir in fact exists?
<katamori> I'm trying to get a file hosted locally from a remote hosting service
bedeho has joined #ipfs
<whyrusleeping> katamori: your daemon is probably not running :)
<katamori> it is, I can use commands like "swarm peers"
<whyrusleeping> hrm... odd
<whyrusleeping> does it return not found immediately?
<katamori> on the other hand, I'm using tmux, is it possible it screws things up?
<katamori> not immediately
<lemmi> katamori: what command are you using?
<katamori> ipfs get
bedeho has quit [Ping timeout: 255 seconds]
<lemmi> hm.. i had that message when toying around with ipfs object and filestore hashes
<katamori> then it's indeed odd, but I've already had plenty of issues with this AWS VM anyway
m3lt has joined #ipfs
kvda has joined #ipfs
<katamori> another question: how can I list "local files" in a fancy way like the webui does? so, not showing all hashes, but only directly pinned files and/or directories
<katamori> how can I do that from a terminal?
<lemmi> ipfs pin ls -t recursive
<lemmi> and maybe direct as well
<katamori> perfect, "recursive" works for me, thanks
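(The pin-listing commands in full, as a small sketch against a go-ipfs 0.4.x node:)
    ipfs pin ls --type=recursive   # roots you pinned recursively -- closest to the webui's view
    ipfs pin ls --type=direct      # objects pinned directly, without their children
    ipfs pin ls --type=all         # everything, including indirect pins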
kegan has joined #ipfs
Encrypt has quit [Quit: Quit]
vapid has quit [Ping timeout: 245 seconds]
vapid has joined #ipfs
reit has joined #ipfs
galois_d_ has joined #ipfs
jrahmy_ has joined #ipfs
galois_dmz has quit [Ping timeout: 240 seconds]
m3tti has joined #ipfs
reit has quit [Quit: Leaving]
ShalokShalom has quit [Remote host closed the connection]
bedeho has joined #ipfs
chungy has joined #ipfs
sein has joined #ipfs
sein is now known as Guest39223
Guest39223 has quit [Remote host closed the connection]
seinthebear has joined #ipfs
seinthebear has quit [Remote host closed the connection]
archpc has quit [Ping timeout: 260 seconds]
archpc has joined #ipfs
mildred1 has quit [Ping timeout: 268 seconds]
JayCarpenter has joined #ipfs
bedeho has quit [Remote host closed the connection]
droman has quit [Ping timeout: 240 seconds]
tilgovi has quit [Ping timeout: 240 seconds]
droman has joined #ipfs
<katamori> theoretical question: ipfs can be used for versioning, that's true, but if the list of hashes of the versions is somehow deleted or lost or just unavailable, doesn't that mean it becomes practically impossible to retrieve those versions?
<katamori> not sure if you can follow
bedeho has joined #ipfs
<katamori> I mean, if I forget or lose the hash to some important file, it's nearly lost, right?
<whyrusleeping> katamori: yeah, for the most part, if you forget a hash, and lose the content, it's generally gone
<emunand[m]> what if you use something like ipfs-search?
<emunand[m]> if you forget the hash
<charlienyc[m]> Wouldn't that require someone else to be hosting the content?
<katamori> well, yes, something like "the Google of IPFS" would have to come along at some point
captain_morgan has quit [Read error: Connection reset by peer]
<emunand[m]> someone else or yourself, since it uses the DHT to find who has what
captain_morgan has joined #ipfs
<katamori> the question is: can you retrieve the extension of a file without attached meta information?
<charlienyc[m]> Presumably, you would know if you stored it as a PDF or doc or txt
<charlienyc[m]> , right?
jaboja has quit [Ping timeout: 255 seconds]
<whyrusleeping> Yeah, with something like ipfs-search, *you* may forget the hash, but ipfs-search remembers it
<whyrusleeping> as long as someone somewhere remembers the hash, it's not lost
<katamori> charlienyc: sure, but your program, if it attempts to search through the network, should have a method to find that out on its own
<katamori> I know there's an IPFS search tool somewhere, but it's nowhere near as usable as HTTP search engines so far
<katamori> so I seriously wonder whether I should start trying something
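(One low-tech safeguard against losing hashes is simply logging everything you add; a sketch, where hashes.log and my-site/ are hypothetical names:)
    # record every root hash you add, with a timestamp
    hash=$(ipfs add -r -q my-site/ | tail -n1)
    echo "$(date -u) $hash my-site" >> hashes.log

    # later, ask the DHT who still provides that content
    ipfs dht findprovs "$hash"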
grumble has quit [Quit: Uncaught java.lang.NullPointerException occurred in IRCConnection.java:59]
grumble has joined #ipfs
jrahmy_ has quit [Remote host closed the connection]
Mateon3 has joined #ipfs
Mateon1 has quit [Ping timeout: 240 seconds]
Mateon3 is now known as Mateon1
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
m3tti has quit [Ping timeout: 268 seconds]
ygrek_ has joined #ipfs
bedeho has quit [Remote host closed the connection]
asyncsec has quit [Quit: asyncsec]
jkilpatr has quit [Ping timeout: 260 seconds]
asyncsec has joined #ipfs
cranky-sleep has quit [Ping timeout: 246 seconds]
espadrine has quit [Ping timeout: 240 seconds]
jkilpatr has joined #ipfs
shizy has quit [Ping timeout: 240 seconds]
kegan_ has joined #ipfs
bedeho has joined #ipfs
neurrowcat has joined #ipfs
ashark has quit [Ping timeout: 260 seconds]
kegan has quit [Ping timeout: 258 seconds]
ryantm has joined #ipfs
asyncsec has quit [Quit: asyncsec]
bedeho has quit [Ping timeout: 240 seconds]
asyncsec has joined #ipfs
webdev007 has joined #ipfs
<kevina> whyrusleeping: is there an easy way to count the maximum number of open files in a Go program
<kevina> or a system tool that will do it for the process
maxlath has quit [Quit: maxlath]
<kevina> something like "time ipfs ..." but which will output the maximum number of open file descriptors at one time
<lgierth> kevina: ulimit -n?
<kevina> that will limit it, not measure it
<lgierth> ah, the current number
<lgierth> so, /proc/sys/something
<kevina> I know about /proc, that's not doing what I want, I want to count the max
<lgierth> i'm clueless what you mean :)
<whyrusleeping> kevina: yeah, there was lsof
<lgierth> ah, actually count them yourself, not get the count?
<whyrusleeping> kevina: or do you want to record a high water mark?
<kevina> yes, record the high water mark
<whyrusleeping> hrm...
<kevina> i.e. what the maximum would have needed to be
<kevina> ulimit with progressively lower values seems to be the best approach I can think of
<whyrusleeping> hrm...
<whyrusleeping> maybe something with fsnotify?
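(One way to record such a high-water mark from outside the process on Linux is to poll /proc/<pid>/fd while the command runs; a rough sketch that can miss very short-lived descriptors:)
    #!/bin/sh
    # usage: ./fd-highwater.sh ipfs add -r some-dir
    "$@" &                              # run the command under test in the background
    pid=$!
    max=0
    while kill -0 "$pid" 2>/dev/null; do
        n=$(ls /proc/"$pid"/fd 2>/dev/null | wc -l)
        [ "$n" -gt "$max" ] && max=$n
        sleep 0.1
    done
    wait "$pid"
    echo "max open fds observed: $max"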
<dsal> I am not able to resolve my name. What makes these things go away?
<whyrusleeping> dsal: if your daemon is offline for a long enough time it won't be resolvable in the network
<dsal> My node was probably down for a few days. Shouldn't it remember stuff when it comes back?
<dsal> But like, locally after bringing it back up?
<dsal> Ooh
<dsal> Like, there has to be consensus to publish it in the first place, right?
<whyrusleeping> yeah, the semantics are weird
<whyrusleeping> we don't assume you are the only owner of the key
<dsal> OK. So this is "normal" then
<whyrusleeping> i'm thinking we should change that assumption to be an option
<whyrusleeping> so you could by default assume that you are the authority on the matter
<whyrusleeping> unless a certain flag is set
<katamori> flag?
<whyrusleeping> like, a flag when you publish, or something
<whyrusleeping> ipfs name publish --someone-else-might-also-own-this-key
<dsal> How do you transplant a key?
<dsal> I actually would like that.
<whyrusleeping> your nodes default key you can't, but you can make new keys with `ipfs key`
<whyrusleeping> and they are saved in $IPFS_PATH/keys
<dsal> It'd be really nice if I could publish from my laptop and do keepalives from a machine at home.
<whyrusleeping> yeah, we don't have that figured out
<dsal> So that blob can live anywhere?
<whyrusleeping> the keepalives of content other people publish are something we still need to write
<whyrusleeping> yeah, that blob can live anywhere
<dsal> Neat. I may do that.
<whyrusleeping> and you can publish with it from anywhere
<whyrusleeping> but you'll need a cron job to do it until we get it wired up properly
<dsal> The only thing I really need to do is have a key that's the "stuff that's interesting" key so I can make sure the machine at home pins all the things.
<dsal> With an authoritative list, it could also unpin the old things.
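(Until republishing is automatic, the cron approach mentioned above could look roughly like this; the key name, paths, and schedule are all made up:)
    # crontab entry on the always-on machine: republish every 6 hours
    #   0 */6 * * * /home/user/republish.sh

    # republish.sh -- re-sign and re-announce the latest hash under the shared key
    hash=$(cat /home/user/latest-hash.txt)
    ipfs name publish --key=shared-key "/ipfs/$hash"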
<katamori> speaking of to-do: what feature is the current focus of prometheus? the repo is a bit messy for me to tell for sure
<SchrodingersScat> dsal: I've been using something like this to sync mine, https://ipfs.io/ipfs/QmZEiBe6foVBHrXoFWEUSRKGp8tiBDzDh3v47Aa4MHwG1a/ipfs-mirpin.bash
<SchrodingersScat> oh, you may be talking about something completely different
<dsal> SchrodingersScat: I was thinking of just putting the hashes in a file addressable by IPNS and let them do it themselves.
<SchrodingersScat> yeah, sure
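(A sketch of that idea: publish the list of interesting hashes under IPNS, and have the home machine resolve it and pin every entry; the IPNS name below is a placeholder:)
    LIST=/ipns/QmYourPublisherKeyID        # placeholder for the list publisher's key
    ipfs cat "$LIST" | while read -r h; do
        ipfs pin add "$h"
    done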
<whyrusleeping> katamori: what do you mean by that?
<katamori> I mean, I assume there's a huge to-do list
<katamori> is it known which item on this list is currently being worked on?
<whyrusleeping> katamori: yeah, so the way i've been structuring it
<whyrusleeping> every issue on the go-ipfs repo should be a "todo"
<whyrusleeping> once it's marked "help wanted" that means it's ready for someone to do it
<SchrodingersScat> what do they call that though, when there's a certain goal timeframe?
<whyrusleeping> things that are higher priority are marked under the 'next release' (0.4.10 in this case)
<whyrusleeping> milestone
<whyrusleeping> and things that are in progress have the issue assigned to a person, and generally have a PR open
<whyrusleeping> SchrodingersScat: what do you mean?
<katamori> thanks, already checking the labeled issues
<SchrodingersScat> whyrusleeping: doesn't matter. I think we're past it.
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<katamori> also, another idea - maybe not new but the permanent nature of the network made me wonder
jaboja has joined #ipfs
<katamori> so, for my shitty frontend code, I'd consider keeping every version with a hash some kind of waste (but I'll do it one day anyway), because it's not intended to be permanent in its form
<katamori> however, there are data structures that are (transaction data, statistics, gov stuff and so on)
kvda has joined #ipfs
<katamori> so, what if, instead of using folders, one would only upload the smallest possible, atomic, permanent forms of some sort of data (again, statistics, for example) and pull them together with a "collector" script
<katamori> unless frequent file access is an issue, it sounds like a fairly good method
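(A rough sketch of the "collector" idea under those assumptions: add each atomic record on its own, then stitch them into one directory object; the file names are hypothetical:)
    h1=$(ipfs add -q stats-2017-04-18.json | tail -n1)
    h2=$(ipfs add -q stats-2017-04-19.json | tail -n1)

    # build an empty unixfs directory and link the pieces into it
    dir=$(ipfs object new unixfs-dir)
    dir=$(ipfs object patch "$dir" add-link stats-2017-04-18.json "$h1")
    dir=$(ipfs object patch "$dir" add-link stats-2017-04-19.json "$h2")
    echo "collection root: $dir"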
rcat has quit [Remote host closed the connection]
anewuser has joined #ipfs
jaboja64 has joined #ipfs
bedeho has joined #ipfs
skeuomorf has quit [Ping timeout: 246 seconds]
cranky-sleep has joined #ipfs
bedeho has quit [Ping timeout: 240 seconds]
jaboja64 has quit [Quit: Leaving]
anewuser has quit [Ping timeout: 240 seconds]
droman has quit []
<M-iav> The connection was refused when attempting to contact ipfs.pics.
<M-iav> The connection was refused when attempting to contact https://ipfs.pics.
cxl000 has quit [Quit: Leaving]
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Danny has joined #ipfs
archpc has quit [Quit: Alt-F4 at console]
Danny is now known as Guest85553
archpc has joined #ipfs
sprint-helper has quit [Remote host closed the connection]
katamori has quit [Ping timeout: 260 seconds]
gmoro has joined #ipfs
bedeho has joined #ipfs
gmoro_ has quit [Ping timeout: 246 seconds]
bedeho has quit [Ping timeout: 240 seconds]
infinity0_ has joined #ipfs
infinity0 has quit [Ping timeout: 268 seconds]
infinity0_ has quit [Changing host]
infinity0 has joined #ipfs