lgierth changed the topic of #ipfs to: go-ipfs v0.4.7 is out! https://dist.ipfs.io/go-ipfs | Week 11+12: Documentation https://git.io/vyHgk | Roadmap: https://waffle.io/ipfs/roadmaps | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | FAQ: https://git.io/voEh8 | Logs: https://botbot.me/freenode/ipfs/ | Code of Conduct: https://git.io/vVBS0
<dryajov> whyrusleeping: sorry, I don't think I ever answered your question on monorepos :D. I guess you're asking what the monorepo is for in the case of the js-ipfs projects? I personally don't like the idea of monorepos, pretty much for the same reasons discussed here - https://github.com/ipfs/community/issues/174, but I definitely feel the pain of
<dryajov> working with the gazillion modules in the js-ipfs repos. Nothing wrong with how they are split, it just becomes a bit cumbersome to manage after a certain point. There should be an easy way of versioning/updating/releasing/managing packages across several repos. Lerna (https://github.com/lerna/lerna) seems to do just that, but it seems to assume that you
<dryajov> have a monorepo. I was wondering if it could work with git subtrees/submodules; I guess there is no reason why it couldn't, and it might make life a bit easier… victorbjelkholm pointed to a repo where he uses a similar tool, but I'd be curious to see how lerna handles it with subtrees/submodules… Bottom line, just trying to figure out what the best
<dryajov> workflow for developing is. There was a little discussion started in this issue - https://github.com/ipfs/js-ipfs/pull/740#issuecomment-288795175.
<whyrusleeping> dryajov: ah!
<whyrusleeping> gotcha
<whyrusleeping> On the go side i've been working on a tool called gx-workspace
<whyrusleeping> (well, a subcommand for the tool anyways)
Guest187693[m] has joined #ipfs
<dryajov> whyrusleeping: ah, that looks really cool… yeah thats pretty much what I’m thinking of
<whyrusleeping> When you need to update a package, you start a new update process and list the packages you want updated
<whyrusleeping> and then you run 'gx-workspace update next'
<whyrusleeping> and it iterates through everything it needs to update in order and does the updates (allowing you to stop and check things at any point)
<whyrusleeping> it still needs a lot of UX work though
<whyrusleeping> so i'm interested in what you're doing
spacebar_ has quit [Quit: spacebar_ pressed ESC]
<dryajov> whyrusleeping: updating versions in package.json can become a real pain… especially when it's a package used by everything else, like multiaddr for example… it'd be nice to have something that tracks the version and can update deps, version, and release in bulk
<whyrusleeping> Hah
<whyrusleeping> Yeah, thats my current hell hole
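As an illustration of the bulk bump dryajov is wishing for, here is a rough Go sketch that rewrites one dependency's version in every package.json under a set of repo checkouts. The directory layout, the caret range, and the idea that re-marshalling the JSON is good enough are all assumptions for the sketch; this is not how lerna or gx-workspace actually work.

```go
// bulkbump is a toy sketch: set dependency "dep" to version "ver" in the
// package.json of every repo directory passed on the command line.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

func bumpDep(pkgJSON, dep, ver string) error {
	raw, err := os.ReadFile(pkgJSON)
	if err != nil {
		return err
	}
	var pkg map[string]interface{}
	if err := json.Unmarshal(raw, &pkg); err != nil {
		return err
	}
	deps, ok := pkg["dependencies"].(map[string]interface{})
	if !ok || deps[dep] == nil {
		return nil // this package does not depend on dep; nothing to do
	}
	deps[dep] = "^" + ver
	// note: re-marshalling a map loses key order; a real tool would edit
	// the file more carefully.
	out, err := json.MarshalIndent(pkg, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(pkgJSON, append(out, '\n'), 0644)
}

func main() {
	dep, ver := os.Args[1], os.Args[2] // e.g. multiaddr 2.0.0
	for _, root := range os.Args[3:] { // repos checked out side by side
		p := filepath.Join(root, "package.json")
		if err := bumpDep(p, dep, ver); err != nil {
			fmt.Fprintln(os.Stderr, p, err)
		}
	}
}
```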
<dryajov> curious if you run into the same issues in the go side?
<dryajov> :D
<whyrusleeping> We make protocols pluggable to avoid having to update go-multiaddr often: https://github.com/multiformats/go-multiaddr/blob/master/protocols.go#L64
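For context, a toy sketch of the pluggable-protocol idea: a registry that downstream code can extend at runtime, so adding a protocol does not require a release of the core package. The struct fields and the example protocol below are made up for the illustration and are not go-multiaddr's actual API.

```go
// Toy sketch of a pluggable protocol table, in the spirit of go-multiaddr's
// protocols.go: consumers register new protocols at runtime instead of
// waiting for a new release of the core package.
package main

import "fmt"

type Protocol struct {
	Name string
	Code int
	Size int // size of the address value in bits; -1 means variable length
}

var protocols = map[string]Protocol{
	"ip4": {Name: "ip4", Code: 4, Size: 32},
	"tcp": {Name: "tcp", Code: 6, Size: 16},
}

// AddProtocol registers a protocol the core package knows nothing about.
func AddProtocol(p Protocol) error {
	if _, exists := protocols[p.Name]; exists {
		return fmt.Errorf("protocol %q already registered", p.Name)
	}
	protocols[p.Name] = p
	return nil
}

func main() {
	// e.g. an experimental transport registers itself at startup
	_ = AddProtocol(Protocol{Name: "my-transport", Code: 9999, Size: -1})
	fmt.Println(protocols)
}
```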
<dryajov> yeah, usually you don’t have to, semver will mostly take care of that for you, but major versions have to be bumped manually
<whyrusleeping> Yeah... We lock versions down strictly
<whyrusleeping> by hash
<whyrusleeping> I don't believe in semver
<dryajov> every version is a major version :D
<Mateon1> Semver is only good if it's actually enforced
<dryajov> yeah, it has its pros and cons… mostly works, but you can screw yourself up
leeola has quit [Quit: Connection closed for inactivity]
<dryajov> if you're not careful with the modifiers
<Mateon1> I remember that some language (Elm? Rust?) forces semver version change if the API changes
Aranjedeath has joined #ipfs
gmoro has quit [Ping timeout: 268 seconds]
<dryajov> ah, thats interesting… havent looked too much into either
<dryajov> monorepos tho remind me of my days of C++
<dryajov> you just throw everything in the same tree and then wait half a day every time you need to pull/push something :D
<Mateon1> IPFS itself is guilty of ignoring Semver
galois_d_ has quit [Remote host closed the connection]
<dryajov> but… that was pre git/mercurial… so might be different now
cemerick has joined #ipfs
<Mateon1> 0.4.5 changed api/v0/swarm/peers, which broke stuff. It also changed the on-disk repo format
galois_dmz has joined #ipfs
<whyrusleeping> Yeah, we don't use semver
<whyrusleeping> or if we do, we use vanity semver
<whyrusleeping> and leave off the patch version
<whyrusleeping> '4' is the major version, '7' is the minor
<dryajov> whyareyousleeping: yeah, thats sane
<Mateon1> `elm-package` will bump versions for you, automatically enforcing these rules
<Mateon1> I wish more langs had that
<Mateon1> Also, elm-package diff for seeing changes in APIs
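A minimal Go sketch of the rule Mateon1 describes (as elm-package enforces it), assuming the public-API diff of added/removed/changed items has already been computed by some other tool:

```go
// requiredBump encodes enforced-semver logic: removing or changing public
// API forces a major bump, adding API forces at least a minor bump, and
// anything else is a patch. Computing the API diff itself is out of scope.
package main

import "fmt"

type APIDiff struct {
	Removed []string // exported items that disappeared
	Changed []string // exported items whose signature changed
	Added   []string // new exported items
}

func requiredBump(d APIDiff) string {
	switch {
	case len(d.Removed) > 0 || len(d.Changed) > 0:
		return "major"
	case len(d.Added) > 0:
		return "minor"
	default:
		return "patch"
	}
}

func main() {
	d := APIDiff{Changed: []string{"Multiaddr.Protocols"}}
	fmt.Println(requiredBump(d)) // major
}
```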
<dryajov> whyrusleeping: I saw that you did some work with ClearCase at some point :), I had to work with it back in 2005, over a VPN from Costa Rica, and I can only tell you, it was not fun :D I think there were separate versions for LAN and WAN, for whatever that meant; we were on the LAN VPN from Central America to Maryland… oh boy...
<whyrusleeping> oh god
<whyrusleeping> i thought i could forget about that dark chapter
<dryajov> lol, sorry for bringing that up :D
<dryajov> you're one of the few people I've seen that had to deal with it...
<dryajov> that was back in 2005-2008
<whyrusleeping> I'm sorry for your loss
<whyrusleeping> the company i worked for was trying to migrate to git
<whyrusleeping> and I had to maintain a bidirectional bridge between git and clearcase
<whyrusleeping> because some developers refused to use git
<dryajov> lol… yeah I used to commit and go home, come back the next day and it would still be committing, and I was committing 2 or 3 files at a time :D
<dryajov> oh lord
<dryajov> yeah… git met some resistance… especially with the enterprise folks
<whyrusleeping> yeah, they said it was hard to use
<whyrusleeping> while using clearcase...
* whyrusleeping had to stop and think about that for a while
<dryajov> haha… yeah
<dryajov> the only thing I can kinda give them credit for was the Windows Explorer integration… nothing else did it as well as they did back then…
<whyrusleeping> True
<dryajov> yeah if you used their client or the command line, then god have mercy on you
<dryajov> :D
<dryajov> lol to the video :D
matoro has quit [Ping timeout: 260 seconds]
matoro has joined #ipfs
spacebar_ has joined #ipfs
Guest182011[m] has joined #ipfs
M-sol56 has joined #ipfs
robattila256 has quit [Quit: WeeChat 1.7]
cemerick has quit [Ping timeout: 246 seconds]
cemerick has joined #ipfs
reit has joined #ipfs
dmr has joined #ipfs
dmr has quit [Remote host closed the connection]
dmr has joined #ipfs
dmr has quit [Changing host]
dmr has joined #ipfs
athan has joined #ipfs
MrControll has quit [Quit: Leaving]
dmr has quit [Ping timeout: 264 seconds]
dawny has quit [Read error: Connection reset by peer]
IRCFrEAK has joined #ipfs
IRCFrEAK has quit [Remote host closed the connection]
bwn has quit [Ping timeout: 240 seconds]
shizy has joined #ipfs
spacebar_ has quit [Quit: spacebar_ pressed ESC]
IRCFrEAK has joined #ipfs
IRCFrEAK has quit [K-Lined]
IRCFrEAK has joined #ipfs
Shatter has joined #ipfs
IRCFrEAK has left #ipfs [#ipfs]
realisation has joined #ipfs
IRCFrEAK has joined #ipfs
IRCFrEAK has quit [K-Lined]
realisation has quit [Max SendQ exceeded]
tmg has quit [Ping timeout: 260 seconds]
Allonphone has joined #ipfs
bwn has joined #ipfs
IRCFrEAK has joined #ipfs
Allonphone has quit [Client Quit]
IRCFrEAK has left #ipfs [#ipfs]
Akaibu has joined #ipfs
horrified has quit [Ping timeout: 246 seconds]
Akaibu has quit []
Akaibu has joined #ipfs
IRCFrEAK has joined #ipfs
Akaibu has quit [Client Quit]
shizy has quit [Ping timeout: 240 seconds]
Akaibu has joined #ipfs
mguentner has quit [Quit: WeeChat 1.7]
IRCFrEAK has left #ipfs [#ipfs]
mbags has quit [Quit: Leaving]
mguentner has joined #ipfs
horrified has joined #ipfs
zabirauf_ has quit [Ping timeout: 240 seconds]
<whyrusleeping> If anyone has a vps (or other machine with a public IP and a fast pipe) that has some spare RAM, running https://github.com/ipfs/dht-node will help reduce the overall bandwidth load on the network, and also generally improve performance
edrex has quit [Remote host closed the connection]
<lemmi> whyrusleeping: how hungry is that tool?
<lemmi> i have tons of bandwidth to spare but not necessarily ram and cpu
<whyrusleeping> lemmi: its not super CPU hungry, but it wants about 80MB of ram
<lemmi> that's ok
DiCE1904 has quit [Read error: Connection reset by peer]
<whyrusleeping> I'm running a single node and its using 61MB
<whyrusleeping> and then i'm also running with -many=10 (run ten nodes) and its taking between 140MB and 280MB
<whyrusleeping> it seems that running multiple nodes is more efficient for some reason
<whyrusleeping> It outputs its memory usage too, so you can tweak it
<lemmi> alright. i'll give it a try
<whyrusleeping> sweet, thanks :)
IRCFrEAK has joined #ipfs
<whyrusleeping> Once we get better transports (like QUIC) implemented, everything will take a lot less memory
<whyrusleeping> (hopefully)
DiCE1904 has joined #ipfs
DiCE1904 has quit [Changing host]
DiCE1904 has joined #ipfs
<lemmi> ah shoot. https://github.com/libp2p/go-testutil has gx dependencies. so simple go get doesn't do it :)
gmcabrita has quit [Quit: Connection closed for inactivity]
IRCFrEAK has left #ipfs [#ipfs]
<whyrusleeping> wait what
<whyrusleeping> I just go get installed it....
<lemmi> building on arm fwiw
<whyrusleeping> lemmi: are you sure? Try `go get -u`
<whyrusleeping> you may have an older cached version from before i fixed all the gx libp2p things
elimiste1e is now known as elimisteve
<lemmi> ah /home/lemmi/go/src/github.com/libp2p/go-libp2p-peer breaks
<whyrusleeping> print error messages?
<whyrusleeping> it will work if you do the gx install stuff, but we're putting a lot of effort into making sure everything still remains go gettable
mguentner2 has joined #ipfs
<whyrusleeping> hrm... something somewhere has gx deps...
<whyrusleeping> try running `go get -u github.com/ipfs/dht-node`
<whyrusleeping> It should pull latest master for each repo
<lemmi> hm.. probably something else. libp2p doesn't import anything with gx
<bwn> exact same messages here
<lemmi> whyrusleeping: did that just now.
<lemmi> whyrusleeping: (with -v -x obviously)
* whyrusleeping investigates further
<whyrusleeping> bwn: thanks
<lemmi> whyrusleeping: ipfs/dht-node/ui.go: human "gx/ipfs/QmPSBJL4momYnE7DcUyk2DVhD6rH488ZmHBGLbxNdhU44K/go-humanize"
mguentner has quit [Ping timeout: 256 seconds]
<whyrusleeping> lemmi: got it, thanks :)
* whyrusleeping was definitely looking in the wrong places
<lemmi> grep -r gx/ipfs did it :)
dimitarvp has quit [Quit: Bye]
<whyrusleeping> ah great :)
<whyrusleeping> I got caught fixing another dependency, lol
benthor has quit [Ping timeout: 260 seconds]
zabirauf has joined #ipfs
<lemmi> ok, up and running
tilgovi has quit [Ping timeout: 246 seconds]
<whyrusleeping> lemmi: great!
<whyrusleeping> I'm provisioning a new vps to run more of these :D
<lemmi> whyrusleeping: what kind of figures should i expect wrt connections and provider records (<- what is that btw)?
<lemmi> currently i'm hovering around 280 connections and >2k records
<whyrusleeping> A provider record is a note that you're storing for the network that says "Peer X has hash Y"
<whyrusleeping> 280 connections seems reasonable, You might see it go up to 400 at times though
<whyrusleeping> provider records will go up steadily, records expire in 24 hours, and a garbage collection is run every hour
<lemmi> ok. why does running more nodes on the same machine help?
<whyrusleeping> so after a day you should get a stable number
<lemmi> ok
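A toy sketch of the provider-record bookkeeping just described: records carry a timestamp, and an hourly sweep drops anything older than 24 hours. The types and names are invented for the illustration; this is not dht-node's actual code.

```go
// Toy provider-record store with the expiry behaviour described above:
// records older than 24h are dropped by a sweep run every hour.
package main

import (
	"fmt"
	"sync"
	"time"
)

type providerRecord struct {
	Peer  string // "Peer X ..."
	Hash  string // "... has hash Y"
	Added time.Time
}

type providerStore struct {
	mu      sync.Mutex
	records []providerRecord
}

func (s *providerStore) Add(peer, hash string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.records = append(s.records, providerRecord{peer, hash, time.Now()})
}

// gc drops records older than maxAge; a long-running node would call this
// from an hourly time.Ticker.
func (s *providerStore) gc(maxAge time.Duration) {
	s.mu.Lock()
	defer s.mu.Unlock()
	kept := s.records[:0]
	for _, r := range s.records {
		if time.Since(r.Added) < maxAge {
			kept = append(kept, r)
		}
	}
	s.records = kept
}

func main() {
	s := &providerStore{}
	s.Add("QmPeerX", "QmHashY")
	s.gc(24 * time.Hour)
	fmt.Println(len(s.records), "records kept")
}
```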
<whyrusleeping> it helps because of the way kademlia works, you try to connect to the 20 closest peers to a given query
<whyrusleeping> if those 20 peers are all on really fast connections, then that query goes faster
IRCFrEAK has joined #ipfs
<whyrusleeping> so by running more peers on the same machine, you spread out more on the dht key space
<lemmi> ah, so if the server is 19 of those 20 connections, it helps
ribasushi has quit [Ping timeout: 260 seconds]
<whyrusleeping> Yeap!
<whyrusleeping> and the more people we have running these, the more likely all 20 of those are fast connections
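A minimal sketch of the "closest peers" idea behind that explanation: Kademlia distance is the XOR of the key and a peer ID, and a lookup converges on the k (here 20) peers whose IDs are XOR-closest to the key. Routing tables and network RPCs are omitted; running several DHT nodes on one fast machine simply puts more of that machine's IDs among the closest peers for more keys.

```go
// Minimal sketch of Kademlia's closest-peers selection: distance is the
// XOR of two IDs, and a lookup targets the k peers nearest to the key.
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"sort"
)

// xorDistance returns the byte-wise XOR of two equal-length IDs.
func xorDistance(a, b []byte) []byte {
	d := make([]byte, len(a))
	for i := range a {
		d[i] = a[i] ^ b[i]
	}
	return d
}

// closest returns the k peer IDs nearest to key under XOR distance.
func closest(key []byte, peers [][]byte, k int) [][]byte {
	sorted := append([][]byte(nil), peers...)
	sort.Slice(sorted, func(i, j int) bool {
		return bytes.Compare(xorDistance(key, sorted[i]), xorDistance(key, sorted[j])) < 0
	})
	if len(sorted) < k {
		k = len(sorted)
	}
	return sorted[:k]
}

func main() {
	key := sha256.Sum256([]byte("some content hash"))
	var peers [][]byte
	for i := 0; i < 100; i++ {
		id := sha256.Sum256([]byte(fmt.Sprintf("peer-%d", i)))
		peers = append(peers, id[:])
	}
	// the 20 peers a query for this key would converge on
	fmt.Println(len(closest(key[:], peers, 20)))
}
```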
ribasushi has joined #ipfs
palkeo has joined #ipfs
palkeo has joined #ipfs
palkeo has quit [Changing host]
ribasushi has quit [Quit: So long and thankful is the fish...]
ribasushi has joined #ipfs
warner has joined #ipfs
<lgierth> oh no :/
<lgierth> AddDir in go-ipfs-api is broken
Akaibu has quit [Quit: Connection closed for inactivity]
palkeo has quit [Quit: Konversation terminated!]
arkimedes has quit [Ping timeout: 240 seconds]
IRCFrEAK has quit [Ping timeout: 240 seconds]
rendar has joined #ipfs
mentos1386 has joined #ipfs
cemerick has quit [Ping timeout: 246 seconds]
mentos1386 has quit [Ping timeout: 240 seconds]
horrified has quit [Quit: brb weechat memory leak]
horrified has joined #ipfs
mentos1386 has joined #ipfs
mildred2 has quit [Read error: Connection reset by peer]
mildred2 has joined #ipfs
mentos1386 has quit [Quit: mentos1386]
Caterpillar has joined #ipfs
ylp has joined #ipfs
inetic has joined #ipfs
s_kunk has quit [Ping timeout: 246 seconds]
espadrine has joined #ipfs
A124 has quit [Quit: '']
A124 has joined #ipfs
A124 has quit [Client Quit]
cxl000 has joined #ipfs
A124 has joined #ipfs
A124 has quit [Client Quit]
A124 has joined #ipfs
athan has quit [Ping timeout: 246 seconds]
athan has joined #ipfs
arpl has joined #ipfs
s_kunk has joined #ipfs
gmcabrita has joined #ipfs
ShalokShalom has quit [Remote host closed the connection]
[BT]Brendan is now known as brendyn
s_kunk has quit [Read error: Connection reset by peer]
s_kunk has joined #ipfs
thomersch has joined #ipfs
ecloud_wfh is now known as ecloud
thomersch has quit [Client Quit]
M386dxturbo[m] has joined #ipfs
gde33 has quit [Remote host closed the connection]
gde33 has joined #ipfs
tmg has joined #ipfs
_rht has joined #ipfs
gmoro has joined #ipfs
Dunkhan has joined #ipfs
mildred2 has quit [Ping timeout: 246 seconds]
Boomerang has joined #ipfs
rcat has joined #ipfs
<Bloo[m]> Is there anything like IPFS (or does IPFS even do this?) that would protect peers while downloading files?
<Bloo[m]> for example plausible deniability for the peers downloading files, like in the case of DMCA torrent trolls
<KheOps> I don't know if IPFS does it, but one way of doing it would be to make sure that people seed a bit of everything, including things they have not asked for
<KheOps> So that the fact that you're seeding or downloading something does not necessarily imply that you deliberately requested it
<r0kk3rz> KheOps: i dont think that would help, actually it would make it worse
<KheOps> That's what Freenet does, except with Freenet you cannot easily know which blocks you are hosting. But you're taking part in storing things that you don't want, knowingly, since you're using Freenet
Boomerang has quit [Quit: Lost terminal]
<Bloo[m]> Do you think there will be any plans for this moving forward?
<Bloo[m]> I think it could end up being a pretty important feature to prevent people from being scared to look at sensitive documents for example
<KheOps> No idea, it's a really tricky question :) And what I suggested may or may not be considered safe, depending on how the adversary sees things
thomersch has joined #ipfs
<r0kk3rz> Bloo[m]: i havent seen any plans about such things. at the moment using findprovs to identify seeders of content is trivial
<Bloo[m]> KheOps: Indeed... Not sure what a viable solution to such an issue would be other than something like making people use Tor but for some files that would be unfair on the Tor network
thomersch has quit [Client Quit]
thomersch has joined #ipfs
thomersch has quit [Client Quit]
thomersch has joined #ipfs
A124 has quit [Quit: '']
A124 has joined #ipfs
google77 has joined #ipfs
<google77> hi
thomersch has quit [Quit: thomersch]
google77 has quit [Quit: leaving]
maxlath has joined #ipfs
andrejc has joined #ipfs
Boomerang has joined #ipfs
andrejc has quit [Client Quit]
A124 has quit [Quit: '']
A124 has joined #ipfs
thomersch has joined #ipfs
jchevalay has joined #ipfs
<Dunkhan> wasn't that the bill that authorises isps to sell all your data?
<jchevalay> yep exactly
<jchevalay> it's very intrusive
<daviddias> All the new goodies here: https://github.com/ipfs/js-ipfs/issues/795
<daviddias> Try it out a let us know what you think! :D
tmg has quit [Quit: leaving]
<r0kk3rz> jchevalay: you have to wonder if this will introduce more competition in the isp sector
maxlath has quit [Ping timeout: 240 seconds]
JayCarpenter has joined #ipfs
kthnnlg has joined #ipfs
clavi has quit [Quit: ZNC - http://znc.in]
<jchevalay> @daviddias good job :)
maxlath has joined #ipfs
<jchevalay> There is a new application that uses ipfs from orbit
Akaibu has joined #ipfs
<haad> on that note, if y'all have a minute, try out the new release of Orbit at https://orbit.chat, we released a massive update to it.
<haad> and congrats on the new js-ipfs release, it's beautiful :)
clavi has joined #ipfs
<jchevalay> @haad good jobs =)
genq has quit [Ping timeout: 246 seconds]
espadrine has quit [Ping timeout: 258 seconds]
<Dunkhan> it probably will
DiCE1904 has quit [Quit: Textual IRC Client: www.textualapp.com]
thomersch has quit [Quit: thomersch]
thomersch has joined #ipfs
_rht has quit [Quit: Connection closed for inactivity]
thomersch has quit [Quit: thomersch]
mildred2 has joined #ipfs
espadrine has joined #ipfs
dimitarvp has joined #ipfs
jkilpatr has quit [Remote host closed the connection]
maxlath1 has joined #ipfs
maxlath has quit [Ping timeout: 240 seconds]
maxlath1 is now known as maxlath
thomersch has joined #ipfs
thomersch has quit [Client Quit]
thomersch has joined #ipfs
ashark has joined #ipfs
r0kk3rz has quit [Quit: WeeChat 1.4]
r0kk3rz has joined #ipfs
cemerick has joined #ipfs
thomersch has quit [Quit: thomersch]
<daviddias> jchevalay: sweet, which one? :D
<jchevalay> daviddias features about js-ipfs (tree, cat etc)
Encrypt has joined #ipfs
<victorbjelkholm> !pin QmYHJUnhVSn9exDtqrfvANeogHe9qUgVoUE1UByENuFFZR blog
<pinbot> now pinning on 8 nodes
<pinbot> pinned on 8 of 8 nodes (0 failures) -- https://ipfs.io/ipfs/QmYHJUnhVSn9exDtqrfvANeogHe9qUgVoUE1UByENuFFZR
hoenir has joined #ipfs
<jchevalay> daviddias i have a question about js-ipfs: can we use it in a background process of an electron app or not?
<daviddias> jchevalay: you can
<daviddias> absolutely
<daviddias> actually, you use it in both
<jchevalay> daviddias because earlier I attempted to use ipfs-connector with an atom application, but it is not compatible
<daviddias> I think this https://github.com/AkashaProject/ipfs-connector is just for go-ipfs
<daviddias> that being said, it is doable with js-ipfs
<daviddias> but if you are using js-ipfs, you don't want to spawn the daemon, you can run everything in process
<jchevalay> daviddias i think that one lets me control an ipfs daemon directly from my atom app; instead, I could run ipfs directly in my atom background process
esph has quit [Ping timeout: 268 seconds]
<jchevalay> daviddias does js-ipfs need go-ipfs running on my host to work, or can I run an ipfs daemon with js-ipfs alone?
<daviddias> just jsipfs
<jchevalay> daviddias so i can start my ipfs daemon in my atom app's background process and stop it when I close my application
JayCarpenter2 has joined #ipfs
<daviddias> exactly
esph has joined #ipfs
jkilpatr has joined #ipfs
<noffle> orbit is looking slick haad
maxlath has quit [Ping timeout: 240 seconds]
<jchevalay> daviddias and if I restart my app, do I get my old data or is a new dag created?
<jchevalay> daviddias and if I restart my app, do I get my old data or is a new repo created?
<daviddias> if you **delete** your local repo, then you have to fetch or recreate the state
<daviddias> jchevalay: try it out :)
<jchevalay> daviddias no, I mean: when I stop my atom application and stop my ipfs node, is my data persisted in the local ipfs repository?
<daviddias> stopping a node does not delete your data
JayCarpenter2 has quit [Ping timeout: 260 seconds]
<daviddias> it is analogous to `ipfs daemon` and stopping the daemon
<daviddias> or `ipfs init` and `rm -r ~/.ipfs`
maxlath has joined #ipfs
anewuser_ has quit [Quit: anewuser_]
<jchevalay> daviddias thx for this information, now I have a lot of work to do to implement that in my project :p
<daviddias> you bet :)
<daviddias> keep us posted!
maxlath has quit [Ping timeout: 240 seconds]
cemerick has quit [Ping timeout: 246 seconds]
anewuser has joined #ipfs
subtraktion has joined #ipfs
anewuser has quit [Remote host closed the connection]
ecloud has quit [Ping timeout: 256 seconds]
anewuser has joined #ipfs
jkilpatr has quit [Ping timeout: 260 seconds]
maxlath has joined #ipfs
ecloud has joined #ipfs
maxlath has quit [Ping timeout: 240 seconds]
dimitarvp` has joined #ipfs
maxlath has joined #ipfs
<sprint-helper1> The next event "docs standup" is in 15 minutes.
dimitarvp has quit [Ping timeout: 268 seconds]
kthnnlg has quit [Remote host closed the connection]
jchevalay has quit [Quit: Page closed]
maxlath has quit [Ping timeout: 240 seconds]
<JayCarpenter> Zoom?
<lgierth> ok i'm here now
<lgierth> making zoom room
Boomerang has quit [Quit: Lost terminal]
<lgierth> JayCarpenter: sorry, new room https://zoom.us/j/666565603
ylp has quit [Quit: Leaving.]
iav_ has joined #ipfs
kthnnlg has joined #ipfs
JayCarpenter has quit [Quit: Page closed]
inetic has quit [Ping timeout: 240 seconds]
shizy has joined #ipfs
anewuser has quit [Quit: anewuser]
kthnnlg has quit [Remote host closed the connection]
iav_ has quit []
keith_analog has joined #ipfs
<keith_analog> Hi All, After upgrading from version 0.4.6 to 0.4.7, I am again having a problem with committing large files. I am working on a nixos linux machine. The problem is related to a limit on # of open file handles. However, I had fixed this problem previously, and it should be fine now. Here's the output "
<keith_analog> ipfs add -r pbbs-pctl-data/
<keith_analog> 64.00 MB / 52.40 GB [>-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------] 0.12% 14m2s17:52:29.679 ERROR commands/h: open /external-drive/ipfs-repo/blocks/OQ: too many open files client.go:247
<keith_analog> Error: open /external-drive/ipfs-repo/blocks/OQ: too many open files
<keith_analog> When I check the max nbr of open files allowed on my system, I get:
<keith_analog> $ cat /proc/sys/fs/file-max
<keith_analog> 1623445
<keith_analog> Any idea what the problem might be?
anewuser has joined #ipfs
ianopolous has quit [Ping timeout: 240 seconds]
<Kubuxu> keith_analog: yeah, it is a known issue; our fix to the DHT caused more connections to be established. Run the ipfs daemon with the `IPFS_FD_MAX=4096` env var
cemerick has joined #ipfs
<Kubuxu> or manually `ulimit -n 4096; ipfs daemon --manage-fdlimit=false`
<keith_analog> Kubuxu: super, thankx
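For background, a rough Go sketch of the mechanism behind fd-limit management like IPFS_FD_MAX / --manage-fdlimit: read RLIMIT_NOFILE and raise the soft limit toward the hard limit. Linux-only, using golang.org/x/sys/unix; this is only an illustration of the mechanism, not go-ipfs's actual implementation.

```go
// Rough sketch of raising the process's open-file limit, the mechanism
// behind things like IPFS_FD_MAX. Linux-only.
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func raiseFDLimit(want uint64) error {
	var lim unix.Rlimit
	if err := unix.Getrlimit(unix.RLIMIT_NOFILE, &lim); err != nil {
		return err
	}
	if want > lim.Max {
		want = lim.Max // the soft limit cannot exceed the hard limit without privileges
	}
	lim.Cur = want
	return unix.Setrlimit(unix.RLIMIT_NOFILE, &lim)
}

func main() {
	if err := raiseFDLimit(4096); err != nil {
		log.Fatal(err)
	}
	var lim unix.Rlimit
	_ = unix.Getrlimit(unix.RLIMIT_NOFILE, &lim)
	fmt.Println("open file limit now:", lim.Cur)
}
```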
cemerick has quit [Ping timeout: 246 seconds]
hoenir_ has joined #ipfs
hoenir has quit [Ping timeout: 246 seconds]
matoro has quit [Ping timeout: 260 seconds]
mbags has joined #ipfs
<victorbjelkholm> keith_analog: https://github.com/ipfs/notes/issues/212 contains some useful tips when adding large amount of files too
matoro has joined #ipfs
<lemmi> whyrusleeping: i took the liberty and replaced leveldb with a map. i can't tell the difference memorywise, but performance is better
<whyrusleeping> lemmi: on dht-node?
<lemmi> whyrusleeping: yep
<whyrusleeping> ah, nice
<whyrusleeping> I was afraid doing that would destroy memory usage
<lemmi> maybe worth a cmdline option
<whyrusleeping> yeah, i was just gonna suggest that
<lemmi> whyrusleeping: running 1.5h with 450 and 20k records @ 50-70mb memory usage (golang gc seems to run very frequently though)
<whyrusleeping> Yeah, we have a lot of work to do on allocating less
<whyrusleeping> memory volatility can be expensive
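A toy sketch of the swap lemmi describes: a map guarded by a sync.RWMutex standing in for leveldb. This is only the shape of such a datastore, not the real go-datastore interface or dht-node code; the single lock is also exactly where contention would show up under heavy concurrent access.

```go
// Toy in-memory key/value store standing in for leveldb in the dht-node
// experiment described above.
package main

import (
	"errors"
	"fmt"
	"sync"
)

var ErrNotFound = errors.New("datastore: key not found")

type MapDatastore struct {
	mu   sync.RWMutex
	data map[string][]byte
}

func NewMapDatastore() *MapDatastore {
	return &MapDatastore{data: make(map[string][]byte)}
}

func (d *MapDatastore) Put(key string, value []byte) error {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.data[key] = value
	return nil
}

func (d *MapDatastore) Get(key string) ([]byte, error) {
	d.mu.RLock()
	defer d.mu.RUnlock()
	v, ok := d.data[key]
	if !ok {
		return nil, ErrNotFound
	}
	return v, nil
}

func (d *MapDatastore) Delete(key string) error {
	d.mu.Lock()
	defer d.mu.Unlock()
	delete(d.data, key)
	return nil
}

func main() {
	ds := NewMapDatastore()
	_ = ds.Put("/providers/QmHashY/QmPeerX", []byte("record"))
	v, err := ds.Get("/providers/QmHashY/QmPeerX")
	fmt.Println(string(v), err)
}
```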
<dryajov> if anyone is interested in jumping into the circuit discussion to provide feedback/ideas, it's currently going on here - https://github.com/libp2p/js-libp2p-circuit/issues/4, I'd really appreciate some feedback/review to make sure we're on the right track
seagreen_ has quit [Ping timeout: 260 seconds]
arpu has quit [Ping timeout: 268 seconds]
sprint-helper has joined #ipfs
sprint-helper1 has quit [Read error: Connection reset by peer]
keith_analog has quit [Quit: Konversation terminated!]
dtz has joined #ipfs
hoenir_ has quit [Remote host closed the connection]
arpu has joined #ipfs
cemerick has joined #ipfs
s_kunk has quit [Ping timeout: 256 seconds]
matoro has quit [Ping timeout: 258 seconds]
Encrypt has quit [Quit: Quit]
espadrine has quit [Ping timeout: 258 seconds]
jkilpatr has joined #ipfs
bwn has quit [Ping timeout: 246 seconds]
Shatter has quit [Ping timeout: 240 seconds]
undiscerning has joined #ipfs
bwn has joined #ipfs
azdle has joined #ipfs
cemerick has quit [Ping timeout: 240 seconds]
bwn has quit [Ping timeout: 268 seconds]
espadrine has joined #ipfs
rendar has quit [Ping timeout: 268 seconds]
bwn has joined #ipfs
nullstyle has quit [Ping timeout: 255 seconds]
nullstyle has joined #ipfs
xming_ has quit [Ping timeout: 255 seconds]
xming has joined #ipfs
xming has joined #ipfs
Muis has quit [Ping timeout: 255 seconds]
Muis has joined #ipfs
tilgovi has joined #ipfs
<Kubuxu> lidel: I see you are following discourse closely, feel free to ping me with questions asked there if they get no response.
<Kubuxu> I try to visit it, but I am bad at it.
<captain_morgan> if I'm pinning a very large object, what will happen if I restart my daemon?
<captain_morgan> will it continue or have to restart?
<captain_morgan> the initial pin has not completed
<captain_morgan> hey! only ~10G to go
rendar has joined #ipfs
jkilpatr has quit [Ping timeout: 240 seconds]
<Kubuxu> you have to call pin again
<Kubuxu> the pin itself isn't saved into pinset until fetching is done
<captain_morgan> meaning it will start over?
<r0kk3rz> captain_morgan: it will resume
blacczenith has joined #ipfs
mguentner2 is now known as mguentner
arkimedes has joined #ipfs
<captain_morgan> cool thanks
<captain_morgan> 45 hours in, ~16 to go, don't want to lose anything
<captain_morgan> but this computer needs a reboot
<whyrusleeping> captain_morgan: yeah, you won't lose progress, don't worry
<whyrusleeping> all ipfs transfers are completely resumable
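A minimal sketch of why re-running the pin resumes rather than restarts: blocks fetched by the interrupted run are already in the local blockstore, so a second walk of the same DAG only hits the network for what is still missing. The Blockstore/Network interfaces and the links callback below are invented for the illustration.

```go
// Sketch of resumable DAG fetching: the walk skips the network for any
// block already stored locally, so re-running it after an interruption
// picks up where the previous run left off.
package pinresume

import "context"

type Blockstore interface {
	Has(cid string) (bool, error)
	Get(cid string) ([]byte, error)
	Put(cid string, data []byte) error
}

type Network interface {
	GetBlock(ctx context.Context, cid string) ([]byte, error)
}

// fetchDAG walks the DAG below root; links extracts child CIDs from a block.
func fetchDAG(ctx context.Context, root string, bs Blockstore, nw Network, links func([]byte) []string) error {
	queue := []string{root}
	for len(queue) > 0 {
		cid := queue[0]
		queue = queue[1:]

		var data []byte
		have, err := bs.Has(cid)
		if err != nil {
			return err
		}
		if have {
			// fetched by a previous (possibly interrupted) run: no network hit
			if data, err = bs.Get(cid); err != nil {
				return err
			}
		} else {
			if data, err = nw.GetBlock(ctx, cid); err != nil {
				return err
			}
			if err = bs.Put(cid, data); err != nil {
				return err
			}
		}
		// walk the children either way, so any still-missing subtree gets fetched
		queue = append(queue, links(data)...)
	}
	return nil
}
```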
Guest188462[m] has joined #ipfs
matoro has joined #ipfs
mildred2 has quit [Read error: No route to host]
mildred2 has joined #ipfs
mildred1 has joined #ipfs
mildred has quit [Read error: Connection reset by peer]
cemerick has joined #ipfs
<lidel> Kubuxu, cool, will do :)
<captain_morgan> great
<whyrusleeping> lidel: yeah, thank you for pushing towards bringing discourse to life!
<whyrusleeping> feel free to let me know as well if theres something i should answer
<captain_morgan> and do I need to rerun pin or is the object already pinned but incomplete?
<Kubuxu> just restart the pin command
amosbird has quit [Ping timeout: 260 seconds]
<captain_morgan> awesome, thanks
amosbird has joined #ipfs
cemerick has quit [Ping timeout: 246 seconds]
cemerick has joined #ipfs
Encrypt has joined #ipfs
<lidel> hm.. speaking of Discourse, is this decided, or still ongoing? https://github.com/ipfs/ops-requests/issues/33 :))
matoro has quit [Ping timeout: 240 seconds]
subtrakt_ has joined #ipfs
subtrakt_ has quit [Remote host closed the connection]
matoro has joined #ipfs
subtraktion has quit [Ping timeout: 256 seconds]
spossiba has quit [Quit: Lost terminal]
spossiba has joined #ipfs
matoro has quit [Remote host closed the connection]
nu11p7r has quit [Ping timeout: 240 seconds]
realisation has joined #ipfs
blacczenith has quit [Remote host closed the connection]
s_kunk has joined #ipfs
leeola has joined #ipfs
eater has quit [Ping timeout: 258 seconds]
eater has joined #ipfs
matoro has joined #ipfs
warner` has joined #ipfs
warner has quit [Ping timeout: 264 seconds]
undiscerning has quit [Ping timeout: 260 seconds]
hornfels has joined #ipfs
bwn has quit [Ping timeout: 260 seconds]
arkimedes has quit [Quit: Leaving]
warner` is now known as warner
<lemmi> whyrusleeping: so i don't have any insight into how the dht node works, but it looks like something is causing greater-than-linear growth in latency with respect to the number of records
matoro has quit [Ping timeout: 240 seconds]
<lemmi> i get sub ms latencies with ~1000 records and i'm over 120ms at 10k
cemerick has quit [Ping timeout: 246 seconds]
bwn has joined #ipfs
matoro has joined #ipfs
bwn has quit [Ping timeout: 240 seconds]
matoro has quit [Ping timeout: 240 seconds]
<whyrusleeping> lemmi: is this with the in memory datastore?
<whyrusleeping> lemmi: my node still using leveldb has 115,000 records and 500us latency
<whyrusleeping> 120ms is definitely too high
<whyrusleeping> lemmi: is it swapping?
<lemmi> whyrusleeping: no swapping without leveldb. i'm trying different GOGC values just for fun.
<whyrusleeping> lemmi: heh, alright.
<whyrusleeping> I wonder if its lock contention around the datastore
gigq has quit [Read error: Connection reset by peer]
gigq has joined #ipfs
guest2403 has joined #ipfs
realisation has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
bwn has joined #ipfs
Guest170096[m] has joined #ipfs
ashark has quit [Ping timeout: 260 seconds]
matoro has joined #ipfs
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<AphelionZ> daviddias: sorry about my PR - I want to commit it and i'm actually sitting down to work on it as we speak
<AphelionZ> I have a couple questions if you have a sec
<AphelionZ> or of dignifiedquire
<guest2403> Hi, i've got a quick and simple question too:
<guest2403> If I publish using the same ipfs hash and use the exact same key, i get different ipns hashes as a result. Though both ipns resolve to the same ipfs.
<guest2403> Is it because the peer id is added? In other words, it's not possible to update an ipns hash from different daemons?
Encrypt has quit [Quit: Quit]
maxlath has joined #ipfs
<AphelionZ> hmm how can I use https://github.com/libp2p/js-libp2p-crypto#keys to generate a key with a passphrase?
<AphelionZ> or to supply something like sort seed bits to generate the key from
spacebar_ has joined #ipfs
cemerick has joined #ipfs
realisation has joined #ipfs
nu11p7r has joined #ipfs
Jesin has quit [Quit: Leaving]
Jesin has joined #ipfs
arpl has left #ipfs [#ipfs]
cemerick has quit [Ping timeout: 246 seconds]
ZarkBit has quit [Quit: Going offline, see ya! (www.adiirc.com)]
brainlet is now known as vapid
ashark has joined #ipfs
robattila256 has joined #ipfs
espadrine has quit [Ping timeout: 260 seconds]
ashark has quit [Ping timeout: 264 seconds]
shizy has quit [Ping timeout: 260 seconds]
matoro has quit [Ping timeout: 240 seconds]
maxlath has quit [Ping timeout: 240 seconds]
ZarkBit has joined #ipfs
matoro has joined #ipfs
mikeal has joined #ipfs
<mikeal> so... which module is responsible breaking up a file into multiple parts to be hashed?
maxlath has joined #ipfs
<AphelionZ> is it possible to generate consistent keys from a block of text? so users can regenerate keys?
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
subtraktion has joined #ipfs
guest2403 has quit [Ping timeout: 260 seconds]
<daviddias> AphelionZ: here now
<AphelionZ> daviddias: great :)
<daviddias> AphelionZ: sounds good, better start from the top then, master is a tad different from when you forked it
<AphelionZ> thats what I noticed. I'll make it as ipfs.repo.exists (unless that is already there)
<AphelionZ> i think ipfs.repo.exists makes more sense anyway
<daviddias> there is a ipfs._repo.exists available
<daviddias> note, you might not need that anymore
<daviddias> new IPFS is way smarter
<AphelionZ> interesting!
<daviddias> it won't try to overload your repo if it is already there
<daviddias> it is analogous to `ipfs daemon --init`
<AphelionZ> even in the browser?
<daviddias> even in the browser
<AphelionZ> hot damn!
<AphelionZ> let me test that with the new version and we'll see if this is still required
<kevina> whyrusleeping: around?
<tidux> ><kevina> why r u sleeping around?
<tidux> kek
<daviddias> AphelionZ: :D
<kevina> tidux: ????
<daviddias> AphelionZ: check all the other goodies here https://github.com/ipfs/js-ipfs/issues/795
<tidux> irc nick joke
<kevina> he chose the nick name not me :)
Oatmeal has quit [Remote host closed the connection]
<dignifiedquire> mikeal: github.com/ipfs/js-ipfs-unixfs-engine
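The module dignifiedquire points to handles the whole import pipeline; the core of the default size-based chunker is just cutting the input into fixed-size pieces (256 KiB by default in go-ipfs) that each become a hashed block. A minimal sketch of that splitting step, with the hashing and DAG-building left out:

```go
// Minimal sketch of size-based chunking: split the input stream into
// fixed-size pieces which would then be hashed and linked into a DAG.
package main

import (
	"fmt"
	"io"
	"strings"
)

const defaultChunkSize = 256 * 1024 // 262144 bytes, go-ipfs's default

// chunks reads r and returns its contents as fixed-size pieces
// (the last one may be shorter).
func chunks(r io.Reader, size int) ([][]byte, error) {
	var out [][]byte
	buf := make([]byte, size)
	for {
		n, err := io.ReadFull(r, buf)
		if n > 0 {
			chunk := make([]byte, n)
			copy(chunk, buf[:n])
			out = append(out, chunk)
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return out, nil
		}
		if err != nil {
			return nil, err
		}
	}
}

func main() {
	data := strings.NewReader(strings.Repeat("x", 600*1024))
	cs, _ := chunks(data, defaultChunkSize)
	fmt.Println(len(cs), "chunks") // 3 chunks: 256K + 256K + 88K
}
```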
Oatmeal has joined #ipfs
dimitarvp` is now known as dimitarvp
maxlath has quit [Ping timeout: 240 seconds]
ashark has joined #ipfs
<kevina> whyrusleeping: just wanted your feedback on https://github.com/ipfs/go-ipfs/pull/3743, be back online in an hour or so
matoro has quit [Ping timeout: 264 seconds]
ashark has quit [Ping timeout: 240 seconds]
matoro has joined #ipfs
Qwazerty has quit [Quit: WeeChat 1.6]