<dignifiedquire>
that is the best solution I currently know about
<dryajov>
dignifiedquire: sweet, thanks
<dignifiedquire>
dryajov: if you find time, adding this as a helper to multiaddr would be great
<dryajov>
yeah… that's a handy one
cemerick has quit [Ping timeout: 246 seconds]
<dryajov>
lgierth ^^^ any thoughts
<dryajov>
on the two address formats?
matoro has quit [Ping timeout: 240 seconds]
matoro has joined #ipfs
pawn has quit [Remote host closed the connection]
ygrek has joined #ipfs
wallacoloo_____ has joined #ipfs
SchrodingersScat has quit [Quit: WeeChat 1.4]
pawn has joined #ipfs
SchrodingersScat has joined #ipfs
SchrodingersScat has quit [Quit: WeeChat 1.4]
<dryajov>
lgierth: I might be reading the spec incorrectly, but to me it reads as though two addresses are sent while the multistream is being negotiated… that doesn't make a lot of sense though. IMO, when the source peer connects to the relay, it sends the destination peer's multiaddr; the relay, when it connects to the dest peer, can look up the src peer's addresses
<dryajov>
from its connection and send them as part of the initial connection…
matoro has quit [Ping timeout: 260 seconds]
<dryajov>
lgierth: right now, the spec makes me think that the src peer will send both addresses, its own and the dest's, which doesn't make sense, hence the question
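The addressing question above can be made concrete with a toy sketch of how a relayed dial could be expressed: the source only names the destination inside a `/p2p-circuit` multiaddr, while the relay learns the source's identity from the incoming connection itself. This is a hand-rolled illustration, not the libp2p implementation; the peer IDs and IP below are hypothetical placeholders.

```python
# Toy sketch (not the actual libp2p code) of composing and splitting a
# relayed multiaddr of the form:
#   <relay multiaddr>/p2p-circuit/<dest multiaddr>
# The source peer only needs to name the destination; the relay can
# learn the source's addresses from the connection it accepted.

def compose_circuit(relay_addr: str, dest_addr: str) -> str:
    """Join a relay's multiaddr and a destination's multiaddr."""
    return f"{relay_addr}/p2p-circuit{dest_addr}"

def split_circuit(addr: str):
    """Split a circuit multiaddr back into (relay, dest) parts."""
    relay, _, dest = addr.partition("/p2p-circuit")
    return relay, dest

relay = "/ip4/203.0.113.7/tcp/4001/ipfs/QmRelayPeerID"  # hypothetical
dest = "/ipfs/QmDestPeerID"                             # hypothetical
addr = compose_circuit(relay, dest)
print(addr)
```

Only one destination address travels in the dial request under this reading; the "second address" never needs to be sent by the source.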
SchrodingersScat has joined #ipfs
rcat has quit [Ping timeout: 268 seconds]
grish has joined #ipfs
grish has quit [Client Quit]
dgrisham has joined #ipfs
aquentson has quit [Ping timeout: 264 seconds]
matoro has joined #ipfs
john4 has joined #ipfs
pawn has quit [Remote host closed the connection]
infinity0 has quit [Ping timeout: 260 seconds]
tgp_ has joined #ipfs
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
maxlath has quit [Quit: maxlath]
infinity0 has joined #ipfs
Oatmeal has quit [Ping timeout: 260 seconds]
dgrisham has quit [Quit: WeeChat 1.7]
HostFat_ has joined #ipfs
chris6131 has left #ipfs [#ipfs]
HostFat has quit [Ping timeout: 260 seconds]
aquentson has joined #ipfs
aquentson1 has joined #ipfs
aquentson has quit [Ping timeout: 260 seconds]
matoro has quit [Ping timeout: 240 seconds]
matoro has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
Nukien has quit [Ping timeout: 245 seconds]
mguentner has quit [Quit: WeeChat 1.7]
mguentner has joined #ipfs
Nukien has joined #ipfs
ygrek has quit [Ping timeout: 268 seconds]
mguentner2 has joined #ipfs
mguentner has quit [Ping timeout: 240 seconds]
gmcabrita has quit [Quit: Connection closed for inactivity]
pfrazee has quit [Remote host closed the connection]
jedahan has joined #ipfs
realisation has joined #ipfs
zabirauf_ has quit [Remote host closed the connection]
zabirauf has joined #ipfs
zabirauf has quit [Remote host closed the connection]
zabirauf has joined #ipfs
Aranjedeath has joined #ipfs
pawn has joined #ipfs
<pawn>
How do I search the ipfs network?
<pawn>
How do I search the dat network for that matter too?
<pawn>
What is the difference between dat and ipfs?
realisation has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
ShalokShalom_ has quit [Remote host closed the connection]
ShalokShalom has joined #ipfs
rcat has joined #ipfs
ShalokShalom has quit [Remote host closed the connection]
ShalokShalom has joined #ipfs
ShalokShalom has quit [Read error: Connection reset by peer]
ShalokShalom has joined #ipfs
AkhILman has joined #ipfs
espadrine has joined #ipfs
ShalokShalom has quit [Read error: Connection reset by peer]
ShalokShalom has joined #ipfs
ShalokShalom has quit [Remote host closed the connection]
ShalokShalom has joined #ipfs
ShalokShalom has quit [Remote host closed the connection]
ShalokShalom has joined #ipfs
gmoro has joined #ipfs
caiogondim has joined #ipfs
ygrek has quit [Ping timeout: 264 seconds]
palkeo has joined #ipfs
gmoro has quit [Remote host closed the connection]
gmoro has joined #ipfs
arpu has quit [Ping timeout: 240 seconds]
riotous[m] has joined #ipfs
gmoro has quit [Remote host closed the connection]
gmoro has joined #ipfs
arpu has joined #ipfs
gmcabrita has joined #ipfs
gmoro has quit [Ping timeout: 260 seconds]
tmg has joined #ipfs
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
aquentson1 has quit [Ping timeout: 260 seconds]
ShalokShalom has quit [Remote host closed the connection]
Caterpillar has joined #ipfs
wallacoloo_____ has quit [Quit: wallacoloo_____]
pcre has joined #ipfs
ShalokShalom has joined #ipfs
espadrine has quit [Ping timeout: 264 seconds]
jkilpatr has quit [Ping timeout: 246 seconds]
pfrazee has joined #ipfs
pfrazee has quit [Ping timeout: 246 seconds]
espadrine has joined #ipfs
BreakIt has joined #ipfs
<BreakIt>
Anyone online using Hexo for generating webpages? I have a strange problem with the links generated for "tags" and "archive"; they all point to / .
<BreakIt>
In my theme/_config.yml the menu items are all set like tags: /tags and the pages are created under sources/
<BreakIt>
Generation is OK; I can access the pages /tags /archives /categories by entering the respective name in the address bar of my browser. But all icons point to "" in the HTML code.
qbi has joined #ipfs
Boomerang has joined #ipfs
gmoro has joined #ipfs
<Kubuxu>
BreakIt: never used it, we are using Hugo
<Kubuxu>
as it has really nice option of making all links relative
Foxcool has quit [Ping timeout: 256 seconds]
<BreakIt>
Hexo also has the relative_link: true option. That fixed some issues with CSS but broke my other links.
gmoro has quit [Ping timeout: 240 seconds]
jkilpatr has joined #ipfs
<BreakIt>
I might try Hugo as well, before I get a heart attack over Hexo...
pcre has quit [Ping timeout: 246 seconds]
Foxcool has joined #ipfs
MDead has joined #ipfs
MDude has quit [Ping timeout: 240 seconds]
MDead is now known as MDude
<neuthral>
lol i got a mail from my isp that port 4001 is scanning for IPs like a virus XD
<neuthral>
i had to explain it's not malware
<neuthral>
waiting for reply
<Stskeeps>
neuthral: hetzner.de or a real ISP?
<neuthral>
real isp
<neuthral>
Sonera finland
<Kubuxu>
neuthral: it is a bit of a known issue; we try to connect with many addresses, which on some ISPs triggers scan detection
<Stskeeps>
Kubuxu: any way to make it similar in aggressiveness/pattern to, let's say, uTorrent DHT? it seems a little stronger than most things
<kthnnlg>
Hi All, I have an ipfs daemon running at id QmVf3VbQin1RPL36GVBKiWtxyVVgSxV1Pbv9jShQaj7pdp. However, I can't seem to connect to the machine, even though the daemon is running with no obvious error status. Could someone see if my node is being recognized by ipns? thanks
<Kubuxu>
kthnnlg: how are you trying to connect to it?
<Kubuxu>
it shouldn't but worth a try to republish
<Kubuxu>
also: can you try `ipfs ping $ID`?
<kthnnlg>
that fails as well
<kthnnlg>
even ipfs ping on the same machine always fails for me
<Kubuxu>
on the same machine it should fail
<Kubuxu>
hmm
<Kubuxu>
how many peers do you have `ipfs swarm peers`?
<kthnnlg>
i see 21
<kthnnlg>
oh wait... now it works
<Kubuxu>
weird
<kthnnlg>
strange... it seems like in the past something similar happened
<kthnnlg>
i wait for > 30 minutes
<kthnnlg>
then i run two commands
<kthnnlg>
ipfs swarm peers
<kthnnlg>
and ipfs name publish ...
<kthnnlg>
that's the only thing that's changed, except for time passing
<kthnnlg>
if i restart ipfs, say by rebooting the machine, should the published content remain the same? the files in the folder i'm sharing are pinned
horrified has quit [Quit: quit]
<kthnnlg>
interesting... now when i swarm i see 4 peers
horrified has joined #ipfs
<Kubuxu>
yeah
<kthnnlg>
it's kinda alternating every 4-5 minutes between 4 and 21 peers
<AphelionZ>
lgierth: what is the ws multiaddr for the venus server, for instance
Foxcool has quit [Read error: Connection reset by peer]
Oatmeal has joined #ipfs
gmoro has quit [Ping timeout: 260 seconds]
Foxcool has joined #ipfs
<apiarian>
Kubuxu, whyrusleeping: any chance we could get https://github.com/ipfs/go-ipfs-api/pull/42 merged? (not that it's a big issue, just it's been sitting open for a while and i've had to use my own fork this whole time)
<Kubuxu>
apiarian: I would have merged it already, I am just not sure if whyrusleeping wants to have a second look over it.
suttonwilliamd has joined #ipfs
<Kubuxu>
and as he was moving he was quite unavailable.
<Kubuxu>
apiarian: are you able to figure out why the initial kill is hanging there?
<Kubuxu>
are some requests still running when kill is executed?
<Kubuxu>
ipfs diag cmds
<Kubuxu>
if not, take a goroutine dump few seconds after initial kill was initiated if you could
<apiarian>
hmm, i'll try... it'll be a bit tricky to set this up, since when I do things from the command line the kill seems to go through just fine and quickly, but when I use the test harness, it hangs. i'll see how I can wire up a diag cmds call in the tests
<apiarian>
probably won't get to it until tomorrow evening or this weekend, though
<Kubuxu>
apiarian: meh, merged the go-ipfs-api PR
<apiarian>
\o/
<Kubuxu>
it was mostly my fault that it wasn't merged for a long time
<kpcyrd>
is travis broken for anybody else? it looks like they messed up csp + cors and now the interface doesn't work correctly
ashark has joined #ipfs
palkeo has quit [Ping timeout: 240 seconds]
<kpcyrd>
csp seems to run in report-only, so it's the cors issue that breaks loading my build details
palkeo has joined #ipfs
Mateon3 has joined #ipfs
<Mateon3>
pawn: Also, regarding your earlier question. ipfs-search.com for searching IPFS, I'm unaware of anything like that for Dat, and for difference between IPFS and Dat: https://github.com/ipfs/faq/issues/119
<pawn>
Mateon3: Thank you! :)
Mateon1 has quit [Ping timeout: 260 seconds]
Mateon3 is now known as Mateon1
palkeo has quit [Ping timeout: 246 seconds]
<pawn>
I have a lot of questions, but it stems from my excitement. First question: If all objects in IPFS are stored using a hash as their "address", would we ever run out of hash space (or what about collisions)?
<pawn>
I know it's unlikely, but is it possible and is it considered?
<r0kk3rz>
pawn: its a multihash so the hash function can be changed in future
<Kubuxu>
pawn: it is "possible" but unlikely, and if ever it becomes a problem we have way to migrate to other hash function/longer function
<pawn>
Is the hash algorithm embedded in the content address?
<Mateon1>
pawn: That's basically how UTF-8 encodes character codepoints
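The point about the hash algorithm being embedded in the address can be sketched concretely: a multihash prefixes the digest with a code naming the hash function and the digest length, so the address is self-describing, much like a UTF-8 leading byte tells you how many bytes the character occupies. This toy uses single bytes for the prefix fields (real multihash uses varints, which coincide with single bytes for small codes like sha2-256):

```python
import hashlib

# Toy multihash: <fn code><digest length><digest>, self-describing like
# a UTF-8 leading byte. Real multihash encodes the two prefix fields as
# varints; for sha2-256 (code 0x12, length 32) they fit in one byte each.

SHA2_256 = 0x12  # multicodec code for sha2-256

def make_multihash(data: bytes) -> bytes:
    digest = hashlib.sha256(data).digest()
    return bytes([SHA2_256, len(digest)]) + digest

def parse_multihash(mh: bytes):
    """Return (fn_code, digest), checking the declared length."""
    fn_code, length = mh[0], mh[1]
    digest = mh[2:]
    assert len(digest) == length
    return fn_code, digest

mh = make_multihash(b"hello ipfs")
code, digest = parse_multihash(mh)
print(hex(code), len(digest))  # 0x12 32
```

Because the function code travels with the digest, the network can later migrate to a different or longer hash without breaking old addresses.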
<pawn>
Never knew that.
<pawn>
Learn something everyday I guess
<pawn>
Also learned what in-band and out-of-band means, I think.
<pawn>
Back to IPFS: So files in the network are content-addressed; they're like the IP addresses of IPFS. But what about domain names?
<pawn>
...Is there a DNS alternative in IPFS land?
<Stskeeps>
IPNS-ish - public keys that resolve to hashes
<Stskeeps>
holder of private key can update the value it points to
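The IPNS model described above can be sketched as a toy: the name is derived from the public key and never changes, while the record it resolves to is updated by the key holder. This is loudly simplified — real IPNS records are signed with the private key and published to the DHT; here a plain dict stands in for the DHT, no actual cryptography is performed, and the `Qm…` name shape is faked for illustration.

```python
import hashlib

# Toy model (NOT real IPNS): a stable name derived from a public key,
# pointing at mutable values. Real IPNS signs each record with the
# private key so only the holder can update it; that is omitted here.

def name_from_pubkey(pubkey: bytes) -> str:
    # fake peer-ID-shaped name for illustration only
    return "Qm" + hashlib.sha256(pubkey).hexdigest()[:16]

records = {}  # stands in for the DHT

def publish(pubkey: bytes, content_hash: str, seq: int):
    records[name_from_pubkey(pubkey)] = {"value": content_hash, "seq": seq}

def resolve(name: str) -> str:
    return records[name]["value"]

pk = b"my public key bytes"
name = name_from_pubkey(pk)
publish(pk, "QmOldContent", seq=1)
publish(pk, "QmNewContent", seq=2)  # same name, new value
print(name, "->", resolve(name))
```

The content itself stays immutable and content-addressed; only the name-to-hash pointer moves.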
A124 has joined #ipfs
A124 has quit [Changing host]
A124 has joined #ipfs
<pawn>
Stskeeps: AFAIK, this is so that an address can stay the same while the content can change, without introducing mutability into the system.
<kythyria[m]>
That's my understanding too.
<kythyria[m]>
It's still hashes because decentralised, resolvable, human readable names are a fairly difficult problem.
<Stskeeps>
yes, kind of like a domain name and a IP address it points to :)
<Stskeeps>
but yeah
<pawn>
The way I understand it, DNS was invented to make addresses human-readable (humanized is the word, I guess?). IPFS would need that as well I assume.
<kythyria[m]>
Human readable and with a layer of indirection.
<pawn>
Indirection?
<Kubuxu>
centralized name authority
Mizzu has joined #ipfs
<pawn>
Ah, right. Yes. I always hated the CA
<pawn>
centralized name authorities*
<pawn>
(I Love Let's Encrypt, but I digress)
<kythyria[m]>
pawn: Indirection in the sense that IPs and domain names aren't inextricably bound together, so you can update an A record without updating everything that might look it up.
<pawn>
kythyria[m]: any ideas out there for a better solution?
<kythyria[m]>
Not really :|
<pawn>
Namecoin?
<kythyria[m]>
Maybe, though I'm wary of stuff duplicated from bitcoin
<pawn>
Isn't namecoin just taking the idea of block-chain tech and applying it to naming things?
<kythyria[m]>
Yeah.
<pawn>
Why are you wary of that?
<pawn>
What's your concern?
<cblgh>
there are examples of how to do a DNS ontop of ethereum
<kythyria[m]>
AIUI bitcoin's design has severe difficulty with not requiring every node to keep a copy of the entire database and history thereof.
<cblgh>
basically you pay a contract to associate a name with your ethereum id (address? not sure about the parlance yet)
<cblgh>
and after that only you can update the associated address
<cblgh>
i presume you have to prove you own the address somehow too? not sure
<cblgh>
oh, thought it was in relation to ethereum & dns
<cblgh>
neat tho
appa has quit [Ping timeout: 260 seconds]
<cblgh>
also i guess a decentralized dns would work kind of like blocklists do today?
<cblgh>
you subscribe to a few that you trust, basically
<cblgh>
i wonder how you handle collisions
<cblgh>
presented with an alternative somehow? or maybe there's a trust/rating mechanism embedded
<r0kk3rz>
it would probably end up more distributed than decentralised, you'd still have a DNS Server but that would connect to the blockchain
<cblgh>
yeah probably
<cblgh>
dns is hard
<cblgh>
like, to find a fundamental decentralized replacement for
<pawn>
What if we didn't have names as a separate system from IPFS. Instead what if we stored names in the files themselves and used a search algorithm to find the files?
Hoosilon has quit [Ping timeout: 255 seconds]
<cblgh>
not sure that i understand what you mean pawn, but you can already search for things by name using e.g. ipfs search
<cblgh>
Sniffs the DHT gossip and indexes file and directory hashes.
<pawn>
In a regular local file system you can have multiple files with the same name (as long as they are separated by directory), and you can search for those files using a search tool on your machine.
<pawn>
DHT gossip (this is a new concept to me)?
<cblgh>
yeah, i'm not sure how it works either
<cblgh>
i got that from the github
<r0kk3rz>
pawn: you know what a DHT is?
<pawn>
distributed hash table = DHT?
<Kubuxu>
we publish DHT entries to be able to locate file chunks when someone wants to access them
ylp has quit [Quit: Leaving.]
<Kubuxu>
so they listen to those messages and discover content using it
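Kubuxu's description of provider records can be pictured with a toy index: peers announce "I can provide block X", and lookups return the set of peers to fetch the chunk from. A plain dict stands in for the actual DHT, which shards this table across the network by key distance; none of that routing is modeled here, and the peer IDs are hypothetical.

```python
import hashlib
from collections import defaultdict

# Toy provider index: a dict stands in for the distributed hash table.
# Peers announce blocks they hold; lookups find who to fetch from.

providers = defaultdict(set)

def block_key(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()

def provide(peer_id: str, chunk: bytes):
    providers[block_key(chunk)].add(peer_id)

def find_providers(key: str):
    return providers.get(key, set())

provide("QmPeerA", b"chunk-1")
provide("QmPeerB", b"chunk-1")
key = block_key(b"chunk-1")
print(sorted(find_providers(key)))  # ['QmPeerA', 'QmPeerB']
```

"Sniffing the gossip", as ipfs-search does, amounts to passively recording these announcements as they pass by.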
<pawn>
What I mean is instead of solving the problem of humanized-naming with another DNS-like system, what if we solved it another way: Name the content by embedding the name within the content. An example would be like a meta element in HTML.
<pawn>
Then, when searching for content with this name, provide a search algorithm within the protocol.
<pawn>
What do you think? Dumb idea?
<cblgh>
hm well
<cblgh>
maybe that could work except with a modification, unless i'm misunderstanding
<cblgh>
ah idk
<pawn>
What modification were you thinking?
<pawn>
(I'm just opening up a discussion about it)
<cblgh>
i was thinking that if i'm searching for "arch linux v 1.2.3.4.5" or something
<cblgh>
there could be a convention, or something else, where you just put that name in a file and hash the file using ipfs
<cblgh>
so when you search for "arch linux v 1.2.3.4.5" you yourself hash that
<cblgh>
get the hash, and look up who else has it
<cblgh>
chances are then that the other peers that have it will also have the containing content, which you could somehow (no idea) ask for
<cblgh>
but yeah, basically connecting the name of what you are looking for to the content itself as a mechanism of finding things
<cblgh>
but the idea has a bunch of holes
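The rendezvous idea cblgh sketches above — hash the human-readable name itself and meet under that key — can be written down in a few lines. This is only the happy path; the holes mentioned (anyone can spoof a name, and mapping the rendezvous back to the real content is unsolved) remain open, and the content hash below is a made-up placeholder.

```python
import hashlib

# Toy name rendezvous: derive a key by hashing the search string, so
# publisher and searcher independently compute the same key. Spoofing
# and name collisions are deliberately NOT handled.

def rendezvous_key(name: str) -> str:
    return hashlib.sha256(name.encode("utf-8")).hexdigest()

announcements = {}  # rendezvous key -> content hash claimed by a publisher

def announce(name: str, content_hash: str):
    announcements[rendezvous_key(name)] = content_hash

def search(name: str):
    return announcements.get(rendezvous_key(name))

announce("arch linux v 1.2.3.4.5", "QmSomeContentHash")  # hypothetical hash
print(search("arch linux v 1.2.3.4.5"))  # QmSomeContentHash
```

Any peer searching for the exact same string lands on the same key, with no central registry involved.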
<pawn>
I like where you are going with the idea
<pawn>
but sure, one hole would be that someone might spoof the content
espadrine has joined #ipfs
<cblgh>
yeah, as well as how do you find the actual content
<cblgh>
like, i guess we can't browse peers in ipfs?
<cblgh>
like one could in dc++, so i've heard
hundchenkatze has joined #ipfs
<pawn>
cblgh: I thought we could?
<cblgh>
pawn: i have no idea
<cblgh>
thus asking haha
<cblgh>
like can i browse all of the stuff you have in your ipfs node easily?
<pawn>
I saw in the alpha demo, Juan was doing a `ipfs id hash-value` on addresses he was connected to
zombu2 has joined #ipfs
<pawn>
cblgh: That I don't know. I was wondering that myself
<pawn>
This brings me to a question I had about security. Do IPFS files have security through obscurity by default or is every file on the IPFS network public?
<cblgh>
i think everything's public
<cblgh>
but it depends on what you mean
<cblgh>
we have many questions it seems eheh
<kythyria[m]>
However, you do need to know the hash to be able to retrieve it
<pawn>
kythyria[m]: hence the security through obscurity
<pawn>
One could implement a layer of encryption somehow I would assume?
<cblgh>
well
<cblgh>
you just encrypt the file
<cblgh>
that's what i do with my binaries in a project that uses ipfs as the transport mechanism
<pawn>
I guess this wouldn't be needed if the files are encrypted end-to-end?
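The "just encrypt the file" approach cblgh mentions can be illustrated: encrypting before adding changes the content address, so the publicly fetchable blocks are opaque to anyone without the key. The XOR keystream below is emphatically NOT real cryptography (a real setup would use an authenticated cipher such as AES-GCM); it only demonstrates that ciphertext hashes differently from plaintext and decrypts back with the shared key.

```python
import hashlib

# Toy demonstration only -- a SHA-256-derived XOR keystream standing in
# for a real cipher. Shows that encrypting before `ipfs add` yields a
# different content address, opaque without the key.

def keystream(key: bytes):
    counter = 0
    while True:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # symmetric: applying twice with the same key recovers the input
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

def content_address(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

secret = b"shared key"
plain = b"my private document"
cipher = xor_cipher(plain, secret)
assert xor_cipher(cipher, secret) == plain
assert content_address(cipher) != content_address(plain)
```

So "security through obscurity" of an unguessable hash can be layered with actual encryption: the network stores and addresses only the ciphertext.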
<lgierth>
AphelionZ: /dns4/venus.i.ipfs.io/tcp/443/wss/ipfs/$peerid -- where $peerid is the bootstrap peerid starting in QmSoLv (SoL=solarnet, v=venus)
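lgierth's answer is easier to read once split into its protocol components. Below is a toy parser for that multiaddr shape — a small hand-written table says which protocols carry a value, whereas real multiaddr implementations consult the multicodec registry; the peer ID in the example is a placeholder, not the real venus bootstrap ID.

```python
# Toy multiaddr splitter: turn "/dns4/host/tcp/443/wss/ipfs/Qm..." into
# (protocol, value) pairs. The TAKES_VALUE table is a hand-picked
# subset; real implementations use the multicodec protocol registry.

TAKES_VALUE = {"ip4": True, "ip6": True, "dns4": True, "dns6": True,
               "tcp": True, "udp": True, "ipfs": True, "p2p": True,
               "ws": False, "wss": False, "quic": False}

def parse_multiaddr(addr: str):
    parts = addr.strip("/").split("/")
    out, i = [], 0
    while i < len(parts):
        proto = parts[i]
        if TAKES_VALUE[proto]:
            out.append((proto, parts[i + 1]))
            i += 2
        else:
            out.append((proto, None))
            i += 1
    return out

# placeholder peer ID, not the real bootstrap peer
addr = "/dns4/venus.i.ipfs.io/tcp/443/wss/ipfs/QmSoLvExamplePeerID"
print(parse_multiaddr(addr))
```

Reading it out: resolve the host via DNS (A record), dial TCP 443, speak secure websockets, and expect the named peer on the other end.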
<xelra>
Guys, I see that ipfs-cluster is around now. What's the state of private networks? I'd really like to start replacing Syncthing and Dropbox with ipfs.
<pawn>
What if there was a way to make hashes more humanized? What I mean is, somehow turn a hash into a visual representation of that hash which could make it easier for us humans to verify a hash. Does something like this exist?
jonnycrunch has quit [Ping timeout: 268 seconds]
jonnycrunch1 is now known as jonnycrunch
<lgierth>
xelra: i think private networks might be in the upcoming 0.4.6 release (Kubuxu knows more)
<AphelionZ>
lgierth: thanks!
<dansup>
pawn, to some degree an identicon serves that purpose
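An identicon of the kind dansup mentions can be sketched in a few lines: derive a small, horizontally mirrored grid from a hash, so two different hashes are very likely to look different at a glance. This is a minimal toy (real generators add color and finer geometry); the mirrored symmetry is just the classic trick for making the pattern feel face-like and memorable.

```python
import hashlib

# Toy identicon: a deterministic, horizontally mirrored 5x5 grid
# derived from a hash of the input, rendered with block characters.

def identicon_grid(data: str, size: int = 5):
    digest = hashlib.sha256(data.encode()).digest()
    half = (size + 1) // 2  # 3 columns, mirrored out to 5
    grid = []
    for row in range(size):
        cells = [digest[(row * half + col) % len(digest)] % 2 == 0
                 for col in range(half)]
        grid.append(cells + cells[-2::-1])  # mirror the left half
    return grid

def render(grid):
    return "\n".join("".join("##" if c else "  " for c in row)
                     for row in grid)

print(render(identicon_grid("QmVf3VbQin1RPL36GVBKiWtxyVVgSxV1Pbv9jShQaj7pdp")))
```

Humans compare the two pictures instead of 46 base58 characters, which makes an accidental or spoofed mismatch much easier to spot.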
<lgierth>
AphelionZ: same for all of the other bootstrap nodes -- QmSoLu=uranus, j=jupiter, and so on
<lgierth>
the addressing will get a little easier soon -- i just don't have a good idea yet how to make dns lookups in a browser
danielrf1 has quit [Quit: WeeChat 1.7]
<xelra>
lgierth: Thanks. I just saw the PR. Says it will ship in 0.4.7. I'm really excited about this. It's what I've been waiting for. And as long as I'm the only user, the pre-shared key model will work too.
<xelra>
I'll do a lot of testing anyway, before I use it in a multi-user environment. Of course then PSK won't be so nice anymore.
<lgierth>
xelra: ah 0.4.7, good to know :P
<lgierth>
it's going to be soon, probably in march
<lgierth>
the libp2p/specs repo has a spec PR for it too
<pawn>
dansup: Identicons are what's up, however I'm assuming the value-space for an identicon from identicon.net is a bit low.
<pinkieval>
Hi, I get a 404 error for http://multiformats.io/ from my browser (but not curl running in the same network)
<cblgh>
multiformats.io works on my end
<lgierth>
pawn: consistently?
<cblgh>
pinkieval: try a different browser?
<pawn>
lgierth: What?
<lgierth>
eeh sorry pawn, i meant pinkieval
<pawn>
lgierth: Ah no worries
erde74 has quit [Ping timeout: 260 seconds]
<pawn>
I'm going to make breakfast now. I'll talk to you guys later! :)
<lgierth>
i was thinking wait that looks like the star-signal 404 response, but that's not possible
<lgierth>
:):)
<lgierth>
turns out saturn and earth are sending multiformats.io to cloud.ipfs.team in a failed attempt to do lets encrypt cert setup
<dansup>
very clean site (multiformats.io), good work!
<lgierth>
:)
ribasushi has quit [Ping timeout: 240 seconds]
<lgierth>
pinkieval: sorry about that ;) should be fixed now
atrapado_ has joined #ipfs
<pinkieval>
lgierth: no problem
koalalorenzo has quit [Ping timeout: 260 seconds]
<seharder>
lgierth: Yesterday you mentioned you had an interesting idea about how to use jenkins with kubernetes. Could you explain this to me.
<lgierth>
seharder: ah yes, didn't get to that point on my todo list yet :D it's about how to queue and run tests. you'd file a PR with the input, someone merges it, jenkins picks it up, and the respective jenkins worker for that hits up whatever component runs the test
<lgierth>
seharder: so we'd get jenkins UI and worker and artifact infrastructure for free
<lgierth>
and we want to figure out how to have jenkins works with artifacts-in-ipfs anyhow
<seharder>
lgierth: I created an issue in test-lab, I will try to update it so that it reflects this.
<lgierth>
thanks dryajov -- i'm a bit scatterbrained today and just working off my list :)
<dryajov>
lgierth: I think that is clearer ... Basically I want to clarify the multistream flow
<dryajov>
lgierth: np! Thanks!
JayCarpenter has joined #ipfs
gmoro has quit [Ping timeout: 246 seconds]
<M-hash>
lgierth : I have some sauce you might be interested in: git change detection integrated with containers integrated with content-addressable image storage that can run builds in k8s.
<M-hash>
It's explicitly *not* jenkins, though :P
s_kunk has quit [Ping timeout: 258 seconds]
<seharder>
lgierth: I think we need to plan a standup for Friday for the test-lab sprint. How should I go about communicating this with everyone
<whyrusleeping>
Gooood morning
<whyrusleeping>
M-hash: where your code be?
pawn has quit [Remote host closed the connection]
<M-hash>
Ah, underpublished atm blush
<lgierth>
seharder: i think standups should ideally be every day, but i'm less of an active party in this sprint so it's not up to me ;) best to open an issue and assign everybody
<M-hash>
Although if you mean in codec land: got the cbor parser passing all tests yesterday, whew.
<Kubuxu>
whyrusleeping: login into chanserv if you can
<whyrusleeping>
chanserv is mean
whyrusleeping has quit [Changing host]
whyrusleeping has joined #ipfs
shizy has joined #ipfs
cemerick has quit [Ping timeout: 246 seconds]
akkad has quit [Excess Flood]
cemerick has joined #ipfs
akkad has joined #ipfs
ribasushi has joined #ipfs
<seharder>
lgierth: I'm afraid I don't have time to have a standup each day.
gmoro has joined #ipfs
akkad has quit [Excess Flood]
<lgierth>
yeeeah :)
<seharder>
lgierth: I have touched base with everyone this morning and I think we are OK for now.
<lgierth>
cool
<lgierth>
thank you
<seharder>
lgierth: I haven't heard from Frank however, but @jonnycrunch seems to have a direction.
<mildred>
whyrusleeping: in case we got some addresses, wouldn't it be better to cache them in the peerstore?
jkilpatr has joined #ipfs
<mildred>
to avoid two peer searches where one would be enough
chris613 has joined #ipfs
<alu>
mildred have you read pattern recognition / spook country?
<mildred>
what's that?
<alu>
just curious, they're good books that have a character named after you
<mildred>
sry, didn't read that
cxl000 has quit [Quit: Leaving]
Mizzu has quit [Ping timeout: 258 seconds]
jkilpatr has quit [Ping timeout: 246 seconds]
Mizzu has joined #ipfs
gwollon is now known as gwillen
jkilpatr has joined #ipfs
wallacoloo_____ has joined #ipfs
matoro has quit [Ping timeout: 258 seconds]
<whyrusleeping>
mildred: yeah, i agree. sending back addresses is probably a good idea
<whyrusleeping>
though i think we wanted to avoid sending tons of addresses that peers already know about
<whyrusleeping>
like, if i'm asking for 100 different provider records
<whyrusleeping>
i don't want to get the same peers addresses 100 times
<mildred>
saving the addresses to the peerstore won't send them to people around ... I'm confused
<whyrusleeping>
no, if we put peer addresses in provider responses
<mildred>
by the way, I had another problem. The peer address was filtered (because both peers are running on my machine and are using the same listend address)
<whyrusleeping>
how does that even work?
<whyrusleeping>
they shouldnt be able to use the same listener address on the same machine
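mildred's suggestion — cache any addresses that come back with a DHT response so a later dial skips a second peer search — can be sketched as a toy peerstore. TTLs, address filtering (the local-address issue above), and whyrusleeping's concern about resending addresses peers already know are all omitted; the peer ID and address are hypothetical.

```python
# Toy peerstore cache for the idea discussed above: remember addresses
# seen in DHT responses to avoid a second peer search before dialing.
# TTLs, filtering, and on-the-wire dedup are deliberately omitted.

peerstore = {}  # peer id -> list of known multiaddrs

def on_dht_response(peer_id: str, addrs):
    if not addrs:
        return
    known = peerstore.setdefault(peer_id, [])
    for a in addrs:
        if a not in known:  # cache each address at most once
            known.append(a)

def addrs_for(peer_id: str):
    return peerstore.get(peer_id, [])

on_dht_response("QmPeerX", ["/ip4/198.51.100.1/tcp/4001"])
on_dht_response("QmPeerX", ["/ip4/198.51.100.1/tcp/4001"])  # duplicate ignored
print(addrs_for("QmPeerX"))
```

The wire-level dedup whyrusleeping wants (not repeating the same peer's addresses across 100 provider responses) would sit on the sending side, not in this cache.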