<whyrusleeping>
SerkanDevel[m]: yeah, though it feels like the author of that post doesn't entirely understand the attack
<whyrusleeping>
in order to prevent nodes from finding a single hash, you have to control the K (20) closest peers to the hash in question.
<whyrusleeping>
and also ensure that you always control the K closest nodes to that hash for the duration of the attack
pomegranatedaddy has joined #ipfs
<whyrusleeping>
that would prevent nodes from finding who has the hash, but if you randomly connected (during the DHT crawl) to the peer who actually has the content, you would be able to get the content you're looking for from them
<whyrusleeping>
It would also be fairly easy to detect such an attack
pomegranatedaddy has quit [Ping timeout: 240 seconds]
erictapen has joined #ipfs
erictapen has quit [Read error: Connection reset by peer]
<Icefoz_>
It would be awesome to carry out such an attack, make tools to detect it, and demonstrate it.
erictapen has joined #ipfs
ashark has joined #ipfs
ashark_ has quit [Ping timeout: 248 seconds]
<Icefoz_>
Also, I updated from 0.4.10 to 0.4.11 and responsiveness on a long-running daemon is still not perfect but is definitely improved :D
<Icefoz_>
I might argue for making deb/rpm/brew package repos instead of an ipfs-update program, but I guess that might not help Windows users.
<whyrusleeping>
Icefoz_: glad to hear 0.4.11 makes things better :)
<whyrusleeping>
0.4.12 should also help significantly when we release that
<Icefoz_>
Don't make it too good, I might lose the desire to help make it better if it's already good enough!
<whyrusleeping>
Icefoz_: hah, don't worry, there will still be plenty to do :)
<Icefoz_>
Oh good.
zippy314` has joined #ipfs
Jesin has joined #ipfs
LelandRolofson[m has left #ipfs ["User left"]
erictapen has quit [Read error: Connection reset by peer]
paulith[m] has joined #ipfs
<zippy314`>
hey folks, looking for someone who can help me navigate the depths of libp2p and the backoff errors I get intermittently, and how to handle them. Currently I see that the backoff count is set to 1. So it seems to me that there must be some expectation of what to do client-side when I get that, but I can't figure it out. When I start to have many nodes talking to each other, one of them will randomly throw this backoff
<zippy314`>
error. What is the expectation, architecturally, about how this should be handled?
inetic has quit [Ping timeout: 255 seconds]
<whyrusleeping>
zippy314`: the backoff errors occur when your node fails to dial a given peer too many times in a certain period of time
<whyrusleeping>
basically, if you get that error, it means the peer you're trying to connect to is not accessible, and you have tried connecting to it too many times
<zippy314`>
Yah, I've been reading that code. Currently dialAttempts is set to 1, with a comment about " _too many dials_ atm"
erictapen has joined #ipfs
erictapen has quit [Remote host closed the connection]
erictapen has joined #ipfs
<zippy314`>
interestingly dialAttempts isn't even used, but at line 125 backoffPeer.tries is set to a hardcoded value of 1.
<zippy314`>
Oh I get it, that is just the initialization....
<zippy314`>
So here's the issue: almost all of the errors I'm seeing that trigger the backoff (logged at line #206) are "cancel" errors, i.e. they appear to be local timeouts. If I put an if statement in there to skip the backoff call and just return the error, then my code seems to work just fine. I find this mystifying.
erictapen has quit [Ping timeout: 248 seconds]
<whyrusleeping>
yeah, dialAttempts isn't relevant
<whyrusleeping>
hrm... what is your code trying to do?
pomegranatedaddy has joined #ipfs
arpu has quit [Ping timeout: 252 seconds]
<zippy314`>
I'm working on holochain's gossip protocols and also testing out our kademlia modifications.
<whyrusleeping>
hrm...
<zippy314`>
As soon as I throw up a bunch of nodes doing lots of communication, I get random failures due to backoff, that just clobber everything and it can't recover.
<whyrusleeping>
who is calling "dial" ?
<zippy314`>
I presume that I'm not interpreting the situation correctly and handling it properly.
<zippy314`>
In this particular case it's one of the kademlia query workers, but in our gossip stuff it's just some other user of the RoutedHost object.
pomegranatedaddy has quit [Ping timeout: 240 seconds]
kiboneu_ is now known as kiboneu
<zippy314`>
Yep, just reconfirmed it. If I surround the AddBackoff() call at line 207 with a protector:
<zippy314`>
if err.Error() != "context canceled" {
Alpha64 has joined #ipfs
<zippy314`>
Then my code works just fine. It seems to me that perhaps there are spurious cancellations of the contexts that don't actually represent connection errors, but are just workers being cleaned up by the kademlia query before they have completed. Does this seem possible?
ilyaigpetrov has joined #ipfs
lachenmayer has quit [Ping timeout: 240 seconds]
alanz has quit [Ping timeout: 240 seconds]
arpu has joined #ipfs
ianopolous_ has joined #ipfs
}ls{ has quit [Ping timeout: 248 seconds]
ccii has quit [Ping timeout: 248 seconds]
alanz has joined #ipfs
}ls{ has joined #ipfs
lachenmayer has joined #ipfs
ccii has joined #ipfs
jungly has quit [Remote host closed the connection]
Pixi_ has quit [Quit: Pixi_]
Xiti has joined #ipfs
toppler has quit [Remote host closed the connection]
mikedd has quit [Quit: Connection closed for inactivity]
dhruvbaldawa has joined #ipfs
joocain2 has quit [Ping timeout: 248 seconds]
dhruvbaldawa has quit [Ping timeout: 255 seconds]
joocain2 has joined #ipfs
atrapado_ has joined #ipfs
talonz has quit [Ping timeout: 248 seconds]
pomegranatedaddy has joined #ipfs
ivo_ has joined #ipfs
pcctw has joined #ipfs
pomegranatedaddy has quit [Ping timeout: 248 seconds]
ivo_ has quit [Remote host closed the connection]
erictapen has joined #ipfs
erictapen has quit [Remote host closed the connection]
erictapen has joined #ipfs
pcctw has quit [Remote host closed the connection]
pcctw has joined #ipfs
pcctw has quit [Client Quit]
<gkbrk>
I am using the pubsub experiment, and there is one issue. When the peers don't send messages for a while, and then start sending again, they don't see each other's messages
<gkbrk>
you need to restart both pub commands around the same time, sometimes even restart the daemon
pcctw has joined #ipfs
<gkbrk>
are these expected and just happening until we replace floodsub?
<whyrusleeping>
gkbrk: no, that feels like a bug
genr8r[m] has joined #ipfs
Jesin has quit [Quit: Leaving]
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
Jesin has joined #ipfs
maxlath has quit [Ping timeout: 240 seconds]
zippy314` has quit [Ping timeout: 260 seconds]
Jesin has quit [Quit: Leaving]
gkbrk has quit [Quit: Leaving]
kiboneu is now known as crodad
pomegranatedaddy has joined #ipfs
Jesin has joined #ipfs
pomegranatedaddy has quit [Ping timeout: 240 seconds]
droman has joined #ipfs
eater has quit [Ping timeout: 240 seconds]
eaterof has joined #ipfs
eaterof is now known as eater
rgrau has joined #ipfs
eater has quit [Ping timeout: 258 seconds]
eater has joined #ipfs
Tootoot222 has joined #ipfs
kewde[m] has quit [Ping timeout: 246 seconds]
crodad is now known as crowill
erictapen has quit [Ping timeout: 258 seconds]
erictapen has joined #ipfs
erictapen has quit [Remote host closed the connection]
Caterpillar2 is now known as Caterpillar
erictapen has joined #ipfs
crowill is now known as kiboneu
erictapen has quit [Ping timeout: 240 seconds]
kewde[m] has joined #ipfs
mikedd has joined #ipfs
pomegranatedaddy has joined #ipfs
zippy314 has joined #ipfs
<zippy314>
whyrusleeping: I think I can write a test that will show the problem, but I can't figure out how to run the tests for my forked version of go-libp2p-kad-dht because of gx. Can you please explain the process for doing this? I have a fork, and I've done `go get github.com/zippy/go-libp2p-kad-dht` and I'm in that subdirectory, and neither `gx test` nor `go test` works.
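For context: gx-vendored packages import their dependencies by hash path (`gx/ipfs/Qm.../...`), so a plain `go test` fails until those paths are rewritten. The usual workflow looks roughly like the following — flag names are from memory of the gx/gx-go tools, so check `gx-go --help` before relying on them:

```
# in the fork's checkout, e.g. $GOPATH/src/github.com/zippy/go-libp2p-kad-dht
gx install             # fetch the package's gx dependencies
gx-go rewrite          # rewrite gx/ipfs/Qm... import paths to go-gettable paths
go test ./...
gx-go rewrite --undo   # restore the gx import paths before committing
```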
eric has joined #ipfs
eric is now known as Guest52839
Guest52839 is now known as zippy314`
pomegranatedaddy has quit [Ping timeout: 240 seconds]
Vladislav has quit [Remote host closed the connection]
ashark has quit [Ping timeout: 248 seconds]
dgrisham has quit [Quit: WeeChat 1.7.1]
ianopolous_ has joined #ipfs
binarycat has quit [Quit: binarycat]
onabreak has joined #ipfs
dgrisham has joined #ipfs
dhruvbaldawa has joined #ipfs
jkilpatr has joined #ipfs
dhruvbaldawa has quit [Ping timeout: 240 seconds]
espadrine has joined #ipfs
<zippy314>
\q
<zippy314>
\exit
zippy314 has quit [Quit: WeeChat 1.4]
xnbya has quit [Ping timeout: 240 seconds]
pomegranatedaddy has joined #ipfs
kaotisk-irc has joined #ipfs
kaotisk has quit [Read error: Connection reset by peer]
pomegranatedaddy has quit [Ping timeout: 240 seconds]
zippy314` has left #ipfs ["Killed buffer"]
Ekho has quit [Remote host closed the connection]
Taoki has quit [Ping timeout: 240 seconds]
xnbya has joined #ipfs
erictapen has joined #ipfs
Taoki has joined #ipfs
Ekho has joined #ipfs
zippy314 has joined #ipfs
xMajedz[m] has joined #ipfs
ashark has joined #ipfs
gkbrk has joined #ipfs
<gkbrk>
Does ipfs use the same kademlia DHT as bittorrent?
Ekho has quit [Remote host closed the connection]
<gkbrk>
what I mean is: do more people with torrent clients benefit IPFS users, and do more IPFS users benefit torrenters?
niggger[m] has joined #ipfs
niggger[m] has left #ipfs [#ipfs]
Ekho has joined #ipfs
<whyrusleeping>
gkbrk: no, they are separate networks
<gkbrk>
would there be any advantage or disadvantage of using the same dht?
<gkbrk>
i guess ipfs putting all the pieces there instead of just the file hashes would make it more noisy than torrents
lidel has quit [Quit: WeeChat 1.9]
atrapado_ has quit [Quit: Leaving]
<whyrusleeping>
plus it's different information in different formats with different guarantees and parameters
erictapen has quit [Ping timeout: 240 seconds]
Jesin has quit [Quit: Leaving]
cdrappier has joined #ipfs
erictapen has joined #ipfs
<cdrappier>
Hi folks! is there a good way to discover content that is on the ipfs network?
stoopkid has quit [Quit: Connection closed for inactivity]
kaotisk-irc has quit [Ping timeout: 240 seconds]
pomegranatedaddy has joined #ipfs
pomegranatedaddy has quit [Ping timeout: 248 seconds]
cdrappier has quit [Quit: Page closed]
zippy314 has quit [Remote host closed the connection]
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
zippy314 has joined #ipfs
kaotisk has joined #ipfs
lidel has joined #ipfs
stoopkid has joined #ipfs
ajsdallas has joined #ipfs
lidel has quit [Quit: WeeChat 1.9.1]
ilyaigpetrov has quit [Quit: Connection closed for inactivity]
lidel has joined #ipfs
ashark has quit [Ping timeout: 240 seconds]
rgrau has quit [Ping timeout: 252 seconds]
yondon has joined #ipfs
<yondon>
I have a Go application that uses libp2p for the network protocol, and I'm currently trying to embed an IPFS node into the application as well. It works fine when go-ipfs is not vendored, but when I vendor go-ipfs, I'm running into these errors
<yondon>
2017/10/18 18:24:54 proto: duplicate proto type registered: relay.pb.CircuitRelay 2017/10/18 18:24:54 proto: duplicate proto type registered: relay.pb.CircuitRelay.Peer panic: proto: duplicate enum registered: relay.pb.CircuitRelay_Status
<yondon>
Any thoughts? Maybe differing versions of gx packages? goroutine 1 [running]: github.com/livepeer/go-livepeer/vendor/gx/ipfs/QmZ4Qi3GaRbjcx28Sme5eMH7RQjGkt8wHxt2a65oLaeFEV/gogo-protobuf/proto.RegisterEnum(0x4bb31a1, 0x1c, 0xc4204ce300, 0xc4204ce330) /Users/yondonfu/Development/go/src/github.com/livepeer/go-livepeer/vendor/gx/ipfs/QmZ4Qi3GaRbjcx28Sme5eMH7RQjGkt8wHxt2a65oLaeFEV/gogo-protobuf/proto/properties.go:876 +0x2ac github.com/liv
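The panic above comes from protobuf runtimes keeping a global name-to-type registry: when two vendored copies of the same generated `.pb.go` file both run their `init()` registration, the second registration of the same fully-qualified name panics. A minimal sketch of that mechanism (a toy registry, not the gogo-protobuf code):

```go
package main

import "fmt"

// registry mimics the global name->type table a protobuf runtime keeps.
var registry = map[string]struct{}{}

// registerType panics on a duplicate name, just as proto.RegisterType does
// when two vendored copies of the same generated file both register it.
func registerType(name string) {
	if _, dup := registry[name]; dup {
		panic("proto: duplicate proto type registered: " + name)
	}
	registry[name] = struct{}{}
}

func main() {
	registerType("relay.pb.CircuitRelay") // first copy's init()
	defer func() {
		fmt.Println(recover()) // second copy triggers the panic seen above
	}()
	registerType("relay.pb.CircuitRelay") // duplicate copy's init()
}
```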
talonz has joined #ipfs
pomegranatedaddy has joined #ipfs
erictapen has quit [Ping timeout: 248 seconds]
cxl000 has quit [Quit: Leaving]
droman has quit [Quit: WeeChat 1.9.1]
pomegranatedaddy has quit [Ping timeout: 240 seconds]
erictapen has joined #ipfs
<whyrusleeping>
yondon: oh yeah, the protobuf differences are annoying
<whyrusleeping>
yondon: do you have the code somewhere that i can take a look?
<whyrusleeping>
yondon: and what tool are you using for vendoring?
<yondon>
whyrusleeping: I've been trying govendor
<whyrusleeping>
mmkay, i'll give it a shot, cloning now
<yondon>
whyrusleeping: I've also tried just copying go-ipfs into the vendor directory and the relevant gx files into vendor/gx/ipfs
<whyrusleeping>
yondon: i think i got it
<whyrusleeping>
i did `cp -r $GOPATH/src/github.com/ipfs/go-ipfs vendor/github.com/ipfs/`
<whyrusleeping>
then i copied the package.json from go-ipfs into the livepeer directory and ran `gx install --local`
<whyrusleeping>
seems to have worked
<yondon>
whyrusleeping: hm ok I'll give that a shot
<whyrusleeping>
doing `gx init` and `gx import --local github.com/ipfs/go-ipfs` should have roughly the same effect, but then all the go-ipfs code will be in vendor/gx/ipfs/... instead of vendor/github.com/ipfs/go-ipfs
<whyrusleeping>
also, with the copying package.json thing, you can delete the package.json after running the gx install. it's just a hack
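Collecting the steps whyrusleeping just described into one sequence (commands as given in the chat; the `-r` flag is added here since go-ipfs is a directory):

```
# vendor go-ipfs itself
cp -r $GOPATH/src/github.com/ipfs/go-ipfs vendor/github.com/ipfs/

# temporarily borrow go-ipfs's dependency manifest and vendor its gx deps
cp $GOPATH/src/github.com/ipfs/go-ipfs/package.json .
gx install --local   # populates vendor/gx/ipfs/...
rm package.json      # the manifest was only needed for the install

# alternative: let gx vendor go-ipfs under vendor/gx/ipfs/... instead
# gx init
# gx import --local github.com/ipfs/go-ipfs
```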
<yondon>
whyrusleeping: So doing all that builds the binary, but when I execute the binary I still get `2017/10/18 19:00:26 proto: duplicate proto type registered: relay.pb.CircuitRelay 2017/10/18 19:00:26 proto: duplicate proto type registered: relay.pb.CircuitRelay.Peer panic: proto: duplicate enum registered: relay.pb.CircuitRelay_Status`
* whyrusleeping
runs the binary
* whyrusleeping
scratches his chin
* whyrusleeping
has an idea
shizy has quit [Ping timeout: 264 seconds]
<whyrusleeping>
yondon: got it
<whyrusleeping>
the issue is that go-livepeer-basicnet depends on different versions of the code
jonnycrunch1 has joined #ipfs
<whyrusleeping>
so go-ipfs, as you're importing it, depends on version X of the circuit relay code, and go-livepeer-basicnet depends on version Y
<lgierth>
that's the protobuf version of go type mismatches i guess :)
espadrine has quit [Ping timeout: 248 seconds]
<whyrusleeping>
yondon: want me to PR an update to go-livepeer-basicnet?
Jesin has joined #ipfs
<yondon>
whyrusleeping: ah I suspected there was some weird versioning issues...
<yondon>
whyrusleeping: if you have the time, that would be great!
<whyrusleeping>
sure thing! i'm already most of the way done with the fixes while debugging this
<yondon>
whyrusleeping: awesome, thanks!
<whyrusleeping>
"other people having issues with gx" generally is my personal highest priority
mikedd has quit [Quit: Connection closed for inactivity]
<whyrusleeping>
yondon: it seems that github.com/livepeer/go-livepeer/types doesn't exist
<whyrusleeping>
but go-livepeer-basicnet depends on it in one of its tests
<whyrusleeping>
fine to ignore?
<yondon>
whyrusleeping: hm yeah fine to ignore - I'll fix that later
slayerjain has quit [Quit: Connection closed for inactivity]
<whyrusleeping>
cool, this oughta do it
<yondon>
sweet, giving it a try now
<yondon>
whyrusleeping: that did the trick! I'll merge in your PR. Thanks again for your help!
<whyrusleeping>
woo!
<whyrusleeping>
yondon: really interesting work on livepeer
<whyrusleeping>
feel free to poke me or the others if ipfs is ever giving you a hard time
jonnycrunch1 has quit [Ping timeout: 252 seconds]
rodolf0 has quit [Ping timeout: 252 seconds]
<yondon>
whyrusleeping: will do! we've been building out the libp2p layer for video and just added the ipfs integration for one of the features of our protocol - we'll have more to show soon!
<whyrusleeping>
:D
<whyrusleeping>
you'll have to give a demo on the monday all hands call when you get something up and running
wking has quit [Ping timeout: 255 seconds]
<yondon>
for sure. will any of you guys be around for devcon?
pomegranatedaddy has joined #ipfs
<whyrusleeping>
i'm gonna try to
<whyrusleeping>
and i think one or two others might be as well
<whyrusleeping>
I think i have a hotel booked, but no plane tickets or devcon tickets yet
<yondon>
cool. we'll be around for that, so hope to see you then
pomegranatedaddy has quit [Ping timeout: 260 seconds]