<dryajov>
whyrusleeping: sorry, I don’t think I ever answered your question on monorepos :D. I guess you're asking what the monorepo is for in the case of the js-ipfs projects? I personally don’t like the idea of monorepos, pretty much for the same reasons discussed here - https://github.com/ipfs/community/issues/174, but I definitely feel the pain of
<dryajov>
working with the gazillion modules in the js-ipfs repos. Nothing wrong with how they are split, it just becomes a bit cumbersome to manage after a certain point. There should be an easy way of versioning/updating/releasing/managing packages across several repos. Lerna (https://github.com/lerna/lerna) seems to do just that, but it seems to assume that you
<dryajov>
have a monorepo. I was wondering if it could work with git subtrees/submodules; I guess there is no reason why it couldn’t, and it might make life a bit easier… victorbjelkholm pointed to a repo where he uses a similar tool, but I’d be curious to see how Lerna handles subtrees/submodules… Bottom line, just trying to figure out what the best approach would be.
<whyrusleeping>
dryajov: ah!
<whyrusleeping>
gotcha
<whyrusleeping>
On the go side i've been working on a tool called gx-workspace
<whyrusleeping>
(well, a subcommand for the tool anyways)
Guest187693[m] has joined #ipfs
<dryajov>
whyrusleeping: ah, that looks really cool… yeah that’s pretty much what I’m thinking of
<whyrusleeping>
When you need to update a package, you start a new update process and list the packages you want updated
<whyrusleeping>
and then you run 'gx-workspace update next'
<whyrusleeping>
and it iterates through everything it needs to update in order and does the updates (allowing you to stop and check things at any point)
<whyrusleeping>
it still needs a lot of UX work though
<whyrusleeping>
so i'm interested in what you're doing
spacebar_ has quit [Quit: spacebar_ pressed ESC]
<dryajov>
whyrusleeping: updating versions in package.json can become a real pain… especially when it’s a package used by everything else, like multiaddr for example… it’d be nice to have something that tracks the version and can update deps, version, and release in bulk
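(A minimal sketch of the kind of bulk bump being described, assuming the js-ipfs repos are checked out side by side under one parent directory; the dependency name, version, and layout are illustrative, and real tooling like Lerna or gx-workspace also handles publishing and cross-repo ordering.)

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "path/filepath"
    )

    // Bump one dependency to a new version in every ./<repo>/package.json.
    // Note: re-marshalling into a map loses the original key order, so a
    // real tool would edit the JSON more carefully.
    func main() {
        const dep, version = "multiaddr", "^2.2.0" // hypothetical target

        paths, _ := filepath.Glob("*/package.json")
        for _, p := range paths {
            raw, err := os.ReadFile(p)
            if err != nil {
                continue
            }
            var pkg map[string]interface{}
            if err := json.Unmarshal(raw, &pkg); err != nil {
                continue
            }
            deps, ok := pkg["dependencies"].(map[string]interface{})
            if !ok || deps[dep] == nil {
                continue // this repo does not depend on it
            }
            deps[dep] = version
            out, _ := json.MarshalIndent(pkg, "", "  ")
            if err := os.WriteFile(p, append(out, '\n'), 0644); err == nil {
                fmt.Println("bumped", dep, "in", p)
            }
        }
    }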
<whyrusleeping>
Hah
<whyrusleeping>
Yeah, that's my current hell hole
<dryajov>
curious if you run into the same issues on the go side?
<Mateon1>
`elm-package` will bump versions for you, automatically enforcing these rules
<Mateon1>
I wish more langs had that
<Mateon1>
Also, elm-package diff for seeing changes in APIs
<dryajov>
whyrusleeping: I saw that you did some work with clearcase at some point :). I had to work with it back in 2005, over a VPN from Costa Rica, and I can only tell you, it was not fun :D I think there were separate versions for LAN and WAN, for whatever that meant; we were on the LAN VPN from Central America to Maryland… oh boy...
<whyrusleeping>
oh god
<whyrusleeping>
i thought i could forget about that dark chapter
<dryajov>
lol, sorry for bringing that up :D
<dryajov>
you're one of the few people I’ve seen that had to deal with it...
<whyrusleeping>
the company i worked for was trying to migrate to git
<whyrusleeping>
and I had to maintain a bidirectional bridge between git and clearcase
<whyrusleeping>
because some developers refused to use git
<dryajov>
lol… yeah, I used to commit and go home, come back the next day and it would still be committing, and I was committing 2 or 3 files at a time :D
<dryajov>
oh lord
<dryajov>
yeah… git met some resistance… especially with the enterprise folks
<whyrusleeping>
yeah, they said it was hard to use
<whyrusleeping>
while using clearcase...
* whyrusleeping
had to stop and think about that for a while
<dryajov>
haha… yeah
<dryajov>
the only thing I can kinda give them credit for was the Windows Explorer integration… nothing else did it as well as they did back then…
<whyrusleeping>
True
<dryajov>
yeah if you used their client or the command line, then god have mercy on you
<dryajov>
:D
<dryajov>
lol to the video :D
matoro has quit [Ping timeout: 260 seconds]
matoro has joined #ipfs
spacebar_ has joined #ipfs
Guest182011[m] has joined #ipfs
M-sol56 has joined #ipfs
robattila256 has quit [Quit: WeeChat 1.7]
cemerick has quit [Ping timeout: 246 seconds]
cemerick has joined #ipfs
reit has joined #ipfs
dmr has joined #ipfs
dmr has quit [Remote host closed the connection]
dmr has joined #ipfs
dmr has quit [Changing host]
dmr has joined #ipfs
athan has joined #ipfs
MrControll has quit [Quit: Leaving]
dmr has quit [Ping timeout: 264 seconds]
dawny has quit [Read error: Connection reset by peer]
IRCFrEAK has joined #ipfs
IRCFrEAK has quit [Remote host closed the connection]
bwn has quit [Ping timeout: 240 seconds]
shizy has joined #ipfs
spacebar_ has quit [Quit: spacebar_ pressed ESC]
IRCFrEAK has joined #ipfs
IRCFrEAK has quit [K-Lined]
IRCFrEAK has joined #ipfs
Shatter has joined #ipfs
IRCFrEAK has left #ipfs [#ipfs]
realisation has joined #ipfs
IRCFrEAK has joined #ipfs
IRCFrEAK has quit [K-Lined]
realisation has quit [Max SendQ exceeded]
tmg has quit [Ping timeout: 260 seconds]
Allonphone has joined #ipfs
bwn has joined #ipfs
IRCFrEAK has joined #ipfs
Allonphone has quit [Client Quit]
IRCFrEAK has left #ipfs [#ipfs]
Akaibu has joined #ipfs
horrified has quit [Ping timeout: 246 seconds]
Akaibu has quit []
Akaibu has joined #ipfs
IRCFrEAK has joined #ipfs
Akaibu has quit [Client Quit]
shizy has quit [Ping timeout: 240 seconds]
Akaibu has joined #ipfs
mguentner has quit [Quit: WeeChat 1.7]
IRCFrEAK has left #ipfs [#ipfs]
mbags has quit [Quit: Leaving]
mguentner has joined #ipfs
horrified has joined #ipfs
zabirauf_ has quit [Ping timeout: 240 seconds]
<whyrusleeping>
If anyone has a vps (or other machine with a public IP and a fast pipe) that has some spare RAM, running https://github.com/ipfs/dht-node will help reduce the overall bandwidth load on the network, and also generally improve performance
edrex has quit [Remote host closed the connection]
<lemmi>
whyrusleeping: how hungry is that tool?
<lemmi>
i have tons of bandwidth to spare but not necessarily ram and cpu
<whyrusleeping>
lemmi: it's not super CPU hungry, but it wants about 80MB of ram
<lemmi>
that's ok
DiCE1904 has quit [Read error: Connection reset by peer]
<whyrusleeping>
I'm running a single node and it's using 61MB
<whyrusleeping>
and then i'm also running with -many=10 (run ten nodes) and it's taking between 140MB and 280MB
<whyrusleeping>
it seems that running multiple nodes is more efficient for some reason
<whyrusleeping>
It outputs its memory usage too, so you can tweak it
<lemmi>
alright. i'll give it a try
<whyrusleeping>
sweet, thanks :)
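(For context, whyrusleeping says above that dht-node prints its own memory usage; in Go that kind of self-reporting typically looks something like the following — a generic sketch, not dht-node's actual code.)

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    // Periodically report Go heap usage, roughly the kind of number a
    // long-running node prints so operators can tune how many nodes to run.
    func main() {
        for range time.Tick(30 * time.Second) {
            var m runtime.MemStats
            runtime.ReadMemStats(&m)
            fmt.Printf("heap: %d MiB, sys: %d MiB, gc cycles: %d\n",
                m.HeapAlloc>>20, m.Sys>>20, m.NumGC)
        }
    }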
IRCFrEAK has joined #ipfs
<whyrusleeping>
Once we get better transports (like QUIC) implemented, everything will take a lot less memory
Akaibu has quit [Quit: Connection closed for inactivity]
palkeo has quit [Quit: Konversation terminated!]
arkimedes has quit [Ping timeout: 240 seconds]
IRCFrEAK has quit [Ping timeout: 240 seconds]
rendar has joined #ipfs
mentos1386 has joined #ipfs
cemerick has quit [Ping timeout: 246 seconds]
mentos1386 has quit [Ping timeout: 240 seconds]
horrified has quit [Quit: brb weechat memory leak]
horrified has joined #ipfs
mentos1386 has joined #ipfs
mildred2 has quit [Read error: Connection reset by peer]
mildred2 has joined #ipfs
mentos1386 has quit [Quit: mentos1386]
Caterpillar has joined #ipfs
ylp has joined #ipfs
inetic has joined #ipfs
s_kunk has quit [Ping timeout: 246 seconds]
espadrine has joined #ipfs
A124 has quit [Quit: '']
A124 has joined #ipfs
A124 has quit [Client Quit]
cxl000 has joined #ipfs
A124 has joined #ipfs
A124 has quit [Client Quit]
A124 has joined #ipfs
athan has quit [Ping timeout: 246 seconds]
athan has joined #ipfs
arpl has joined #ipfs
s_kunk has joined #ipfs
gmcabrita has joined #ipfs
ShalokShalom has quit [Remote host closed the connection]
[BT]Brendan is now known as brendyn
s_kunk has quit [Read error: Connection reset by peer]
s_kunk has joined #ipfs
thomersch has joined #ipfs
ecloud_wfh is now known as ecloud
thomersch has quit [Client Quit]
M386dxturbo[m] has joined #ipfs
gde33 has quit [Remote host closed the connection]
gde33 has joined #ipfs
tmg has joined #ipfs
_rht has joined #ipfs
gmoro has joined #ipfs
Dunkhan has joined #ipfs
mildred2 has quit [Ping timeout: 246 seconds]
Boomerang has joined #ipfs
rcat has joined #ipfs
<Bloo[m]>
Is there anything like IPFS (or does IPFS even do this?) that would protect peers while downloading files?
<Bloo[m]>
for example plausible deniability for the peers downloading files, like in the case of DMCA torrent trolls
<KheOps>
I don't know if IPFS does it, but one way of doing it would be to make sure that people seed a bit of everything, including things they have not asked for
<KheOps>
So that the fact that you're seeding or downloading something does not necessarily imply that you deliberately requested it
<r0kk3rz>
KheOps: i don't think that would help, actually it would make it worse
<KheOps>
That's what Freenet does, except with Freenet you cannot easily know which blocks you are hosting. But you're taking part in storing things that you don't want, knowingly, since you're using Freenet
Boomerang has quit [Quit: Lost terminal]
<Bloo[m]>
Do you think there will be any plans for this moving forward?
<Bloo[m]>
I think it could end up being a pretty important feature to prevent people from being scared to look at sensitive documents for example
<KheOps>
No idea, it's a really tricky question :) And what I suggested may or may not be considered safe, depending on how the adversary sees things
thomersch has joined #ipfs
<r0kk3rz>
Bloo[m]: i haven't seen any plans about such things. at the moment, using findprovs to identify seeders of content is trivial
<Bloo[m]>
KheOps: Indeed... Not sure what a viable solution to such an issue would be other than something like making people use Tor but for some files that would be unfair on the Tor network
kthnnlg has quit [Remote host closed the connection]
iav_ has quit []
keith_analog has joined #ipfs
<keith_analog>
Hi All, after upgrading from version 0.4.6 to 0.4.7, I am again having a problem with committing large files. I am working on a NixOS linux machine. The problem is related to a limit on the # of open file handles. However, I had fixed this problem previously, and it should be fine now. Here's the output:
<keith_analog>
ipfs add -r pbbs-pctl-data/
<keith_analog>
64.00 MB / 52.40 GB [>-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------] 0.12% 14m2s17:52:29.679 ERROR commands/h: open /external-drive/ipfs-repo/blocks/OQ: too many open files client.go:247
<keith_analog>
Error: open /external-drive/ipfs-repo/blocks/OQ: too many open files
<keith_analog>
When I check the max nbr of open files allowed on my system, I get:
<keith_analog>
$ cat /proc/sys/fs/file-max
<keith_analog>
1623445
<keith_analog>
Any idea what the problem might be?
anewuser has joined #ipfs
ianopolous has quit [Ping timeout: 240 seconds]
<Kubuxu>
keith_analog: yeah, it's a known issue - our fix to the DHT caused more connections to be established; run the ipfs daemon with the `IPFS_FD_MAX=4096` env var
cemerick has joined #ipfs
<Kubuxu>
or manually `ulimit -n 4096; ipfs daemon --manage-fdlimit=false`
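(Side note on the limits involved: /proc/sys/fs/file-max is the system-wide cap, while the "too many open files" error comes from the per-process limit (ulimit -n), which is what IPFS_FD_MAX and --manage-fdlimit are about. A rough sketch of what raising that limit looks like at the syscall level — generic Go, not go-ipfs's actual implementation.)

    package main

    import (
        "fmt"
        "syscall"
    )

    // Raise the per-process open-file limit (RLIMIT_NOFILE) towards 4096,
    // the same value Kubuxu suggests above. The soft limit can only be
    // raised up to the hard limit without extra privileges.
    func main() {
        var lim syscall.Rlimit
        if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
            panic(err)
        }
        fmt.Printf("before: soft=%d hard=%d\n", lim.Cur, lim.Max)

        want := uint64(4096)
        if want > lim.Max {
            want = lim.Max
        }
        lim.Cur = want
        if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
            panic(err)
        }
    }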
<lemmi>
whyrusleeping: i took the liberty of replacing leveldb with a map. i can't tell the difference memory-wise, but performance is better
<whyrusleeping>
lemmi: on dht-node?
<lemmi>
whyrusleeping: yep
<whyrusleeping>
ah, nice
<whyrusleeping>
I was afraid doing that would destroy memory usage
<lemmi>
maybe worth a cmdline option
<whyrusleeping>
yeah, i was just gonna suggest that
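(A sketch of what that command-line option could look like, using go-datastore's in-memory MapDatastore in place of leveldb; the flag name and wiring are hypothetical, and the import paths assume the current go-datastore / go-ds-leveldb packages.)

    package main

    import (
        "flag"
        "log"

        ds "github.com/ipfs/go-datastore"
        dssync "github.com/ipfs/go-datastore/sync"
        leveldb "github.com/ipfs/go-ds-leveldb"
    )

    // openDatastore picks where records live: leveldb on disk, or a
    // mutex-wrapped in-memory map like the one lemmi swapped in.
    func openDatastore(inMemory bool, path string) (ds.Datastore, error) {
        if inMemory {
            return dssync.MutexWrap(ds.NewMapDatastore()), nil
        }
        return leveldb.NewDatastore(path, nil)
    }

    func main() {
        mem := flag.Bool("mem-datastore", false, "keep records in memory (hypothetical flag)")
        flag.Parse()

        store, err := openDatastore(*mem, "records.ldb")
        if err != nil {
            log.Fatal(err)
        }
        defer store.Close()
        // ...hand the datastore to the DHT node(s) here...
    }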
<lemmi>
whyrusleeping: running 1.5h with 450 and 20k records @ 50-70mb memory usage (golang gc seems to run very frequently though)
<whyrusleeping>
Yeah, we have a lot of work to do on allocating less
<whyrusleeping>
memory volatility can be expensive
<dryajov>
if anyone is interested in jumping into the circuit discussion to provide feedback/ideas, it’s currently going on here - https://github.com/libp2p/js-libp2p-circuit/issues/4. I’d really appreciate some feedback/review to make sure we’re on the right track
seagreen_ has quit [Ping timeout: 260 seconds]
arpu has quit [Ping timeout: 268 seconds]
sprint-helper has joined #ipfs
sprint-helper1 has quit [Read error: Connection reset by peer]
keith_analog has quit [Quit: Konversation terminated!]
dtz has joined #ipfs
hoenir_ has quit [Remote host closed the connection]
subtrakt_ has quit [Remote host closed the connection]
matoro has joined #ipfs
subtraktion has quit [Ping timeout: 256 seconds]
spossiba has quit [Quit: Lost terminal]
spossiba has joined #ipfs
matoro has quit [Remote host closed the connection]
nu11p7r has quit [Ping timeout: 240 seconds]
realisation has joined #ipfs
blacczenith has quit [Remote host closed the connection]
s_kunk has joined #ipfs
leeola has joined #ipfs
eater has quit [Ping timeout: 258 seconds]
eater has joined #ipfs
matoro has joined #ipfs
warner` has joined #ipfs
warner has quit [Ping timeout: 264 seconds]
undiscerning has quit [Ping timeout: 260 seconds]
hornfels has joined #ipfs
bwn has quit [Ping timeout: 260 seconds]
arkimedes has quit [Quit: Leaving]
warner` is now known as warner
<lemmi>
whyrusleeping: so i don't have any insight into how the dht node works, but it looks like something is causing greater-than-linear growth in latency with respect to the number of records
matoro has quit [Ping timeout: 240 seconds]
<lemmi>
i get sub ms latencies with ~1000 records and i'm over 120ms at 10k
cemerick has quit [Ping timeout: 246 seconds]
bwn has joined #ipfs
matoro has joined #ipfs
bwn has quit [Ping timeout: 240 seconds]
matoro has quit [Ping timeout: 240 seconds]
<whyrusleeping>
lemmi: is this with the in memory datastore?
<whyrusleeping>
lemmi: my node still using leveldb has 115,000 records and 500us latency
<whyrusleeping>
120ms is definitely too high
<whyrusleeping>
lemmi: is it swapping?
<lemmi>
whyrusleeping: no swapping without leveldb. i'm trying different GOGC values just for fun.
<whyrusleeping>
lemmi: heh, alright.
<whyrusleeping>
I wonder if it's lock contention around the datastore
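(If it does turn out to be lock contention, one cheap experiment — a hedged sketch, not anything dht-node actually ships — is an RWMutex-guarded map, so concurrent lookups don't all queue behind a single mutex.)

    package main

    import "sync"

    // recordStore is a toy in-memory record map. RWMutex lets many Gets run
    // in parallel and only serializes writers; with a plain sync.Mutex every
    // lookup waits in line, which under heavy concurrency can show up as
    // latency that grows much faster than the record count.
    type recordStore struct {
        mu      sync.RWMutex
        records map[string][]byte
    }

    func newRecordStore() *recordStore {
        return &recordStore{records: make(map[string][]byte)}
    }

    func (s *recordStore) Get(key string) ([]byte, bool) {
        s.mu.RLock()
        defer s.mu.RUnlock()
        v, ok := s.records[key]
        return v, ok
    }

    func (s *recordStore) Put(key string, val []byte) {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.records[key] = val
    }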
gigq has quit [Read error: Connection reset by peer]
gigq has joined #ipfs
guest2403 has joined #ipfs
realisation has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
bwn has joined #ipfs
Guest170096[m] has joined #ipfs
ashark has quit [Ping timeout: 260 seconds]
matoro has joined #ipfs
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<AphelionZ>
daviddias: sorry about my PR - I want to commit it and i'm actually sitting down to work on it as we speak
<AphelionZ>
I have a couple questions if you have a sec
<AphelionZ>
or for dignifiedquire
<guest2403>
Hi, i've got a quick and simple question too:
<guest2403>
If I publish using the same ipfs hash and the exact same key, i get different ipns hashes as a result. Though both ipns names resolve to the same ipfs hash.
<guest2403>
Is it because the peer id is added? In other words, it's not possible to update an ipns hash from different daemons?