<achin>
it seemed to be a good idea, since --type=recursive seemed to return nothing
<achin>
(Caterpillar maybe you can confirm that when you first ran ipfs pin ls --type=recursive it returned nothing)
<Caterpillar>
achin: mmh? Do you need me?
elico has quit [Ping timeout: 256 seconds]
<Caterpillar>
$ ipfs pin ls --type=recursive does return stuff
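For reference, the pin-type filters being discussed (output is simply whatever the node has pinned):

    ipfs pin ls --type=recursive   # roots of recursively pinned DAGs (ipfs add pins this way)
    ipfs pin ls --type=direct      # objects pinned directly, without their children
    ipfs pin ls --type=indirect    # objects kept alive because a recursive pin references them
    ipfs pin ls --type=all         # everything (the default)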
sigmaister[m] has left #ipfs ["User left"]
<Caterpillar>
I have to go
<Caterpillar>
good night
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
anonymuse has joined #ipfs
fleeky__ has joined #ipfs
fleeky_ has quit [Ping timeout: 272 seconds]
<achin>
it might be neat to set up ipfs in a docker container for running isolated tests like this (pin some stuff, confirm it's listed, etc)
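A rough sketch of such a throwaway test, assuming the ipfs/go-ipfs Docker image (the image name, and that `ipfs` is on the container's PATH, are assumptions):

    # start a disposable node; the image's entrypoint inits a repo and runs the daemon
    docker run -d --name ipfs-test ipfs/go-ipfs
    sleep 5   # give the daemon a moment to come up
    # add (and therefore recursively pin) some content inside the container
    docker exec ipfs-test sh -c 'echo hello > /tmp/f && ipfs add -q /tmp/f'
    # confirm the new hash shows up in the pin list
    docker exec ipfs-test ipfs pin ls --type=recursive
    # throw the whole thing away afterwards
    docker rm -f ipfs-test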
pfrazee has joined #ipfs
bastianilso____ has quit [Quit: bastianilso____]
bastianilso____ has joined #ipfs
pfrazee has quit [Ping timeout: 248 seconds]
__uguu__ has quit [Quit: WeeChat 1.4]
__uguu__ has joined #ipfs
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
cyanobacteria has quit [Ping timeout: 245 seconds]
AniSky has joined #ipfs
emunand has joined #ipfs
<emunand>
hello
<emunand>
can someone tell me the ports that IPFS uses?
<emunand>
my router does not have UPnP, so i have to define the ports
<interfect[m]>
I think it's all on 4001 inbound
<interfect[m]>
Look at your node's multiaddrs
<interfect[m]>
The ports are all in there
<emunand>
it says 4001 and 4002
<emunand>
should i just use 4001?
<achin>
the ports should be listed in the "Addresses" section when you run "ipfs id"
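For reference, the addresses in question look something like this (the IPs and peer ID below are illustrative, not real output):

    # print the node's identity, including the "Addresses" it listens on
    ipfs id
    # typical entries:
    #   "/ip4/192.168.1.10/tcp/4001/ipfs/QmYourPeerID"
    #   "/ip4/192.168.1.10/udp/4002/utp/ipfs/QmYourPeerID"
    # forward TCP 4001 (and UDP 4002, if listed) from the router to this machine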
pfrazee has joined #ipfs
cryptix has joined #ipfs
<emunand>
yes, i still get 4001 and 4002
<emunand>
but 4001 is tcp while 4002 is udp
<emunand>
so i guess i do that
<achin>
sounds good
<emunand>
i'll tell you how it goes
<emunand>
hopefully there is an orbit version of this irc
<emunand>
seems to have worked
<emunand>
and i am no longer randomly disconnecting
<emunand>
thanks for the help
<interfect[m]>
Hooray!
emunand has quit [Remote host closed the connection]
emunand has joined #ipfs
<emunand>
well that didn't work
<emunand>
i guess i will just file a bug on github
<emunand>
thanks for the other help though
emunand has left #ipfs [#ipfs]
<achin>
several people have reported certain cable modems have issues with the ipfs peer-2-peer traffic
<achin>
damnit, why do people not hang around!
espadrine has quit [Ping timeout: 248 seconds]
cemerick has joined #ipfs
_Vi has joined #ipfs
<_Vi>
Is there Debian repository with IPFS package that provides the ipfs executable and init.d initscript to allow easy system-wide installation?
<achin>
i'm not sure about a debian repo, but i would just download the ipfs binary, and copy it to wherever you want. (this won't auto-update, though)
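A minimal manual setup along those lines might look like this (the release URL, version, and paths are assumptions following the usual dist.ipfs.io layout):

    # download a release tarball and put the binary on the PATH
    wget https://dist.ipfs.io/go-ipfs/v0.4.4/go-ipfs_v0.4.4_linux-amd64.tar.gz
    tar xzf go-ipfs_v0.4.4_linux-amd64.tar.gz
    sudo cp go-ipfs/ipfs /usr/local/bin/
    # create an unprivileged user and initialize its repo
    sudo adduser --system --group --home /var/lib/ipfs ipfs
    sudo -u ipfs -H ipfs init
    # an init.d or systemd unit would then run this at boot:
    sudo -u ipfs -H ipfs daemon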
<_Vi>
Expected features: 1. auto-update; 2. Automatic creation of an unprivileged user; 3. Automatic start at boot.
<_Vi>
Is IPFS stable enough to be deployed and left without updates on unattended server for a year?
<achin>
IPFS is still changing rapidly, i'm not sure i would be willing to use a 1 year old version :)
<_Vi>
Then how do I do auto-updates? Is IPFS ready for production?
<achin>
i'm not quite sure what you mean by 'production', but the answer is 'probably not'
<achin>
i attribute that mainly to the evolving nature of a lot of important stuff in IPFS
<_Vi>
Here it means 1. installing it and expecting it to just work (serving stored files, accepting new files through IPFS) without periodic manual intervention; 2. Developer mindset like "We are production. No breaking changes. Security is a serious thing now".
<achin>
on point 1) i could imagine that there could be a breaking change in 12 months, that would prevent your old ipfs node from talking with the rest of the network. i'm not aware of any planned changes like this, but it could happen
<achin>
so in my opinion, anybody using ipfs needs to stay up-to-date with the ipfs development progress
erg has left #ipfs [#ipfs]
<pjz>
there's an 'ipfs-update' command/script
<pjz>
that mostly fulfills the 'auto-update' desire
<pjz>
though I don't know if it's fully functional yet
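Assuming ipfs-update works as advertised, the flow is roughly:

    ipfs-update versions         # list the releases it knows about
    ipfs-update install latest   # fetch and install the newest release, stashing the old binary
    ipfs-update revert           # roll back to the stashed binary if something breaks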
<_Vi>
Can go-ipfs mount (using FUSE) locally cached data while offline?
cholcombe has quit [Quit: Leaving]
<_Vi>
Can I mount /ipfs without /ipns? Even after I disable mounting /ipns I get "Could not resolve name" with /ipns/...
<achin>
if you access something via the fuse layer, yes the data will be cached in $IPFS_PATH (defaults to $HOME/.ipfs)
<_Vi>
achin, Can I read that cache later offline with FUSE?
<achin>
i think so, yes. but i've not tried that myself
<achin>
i can't think of anything that would prevent it from working
<_Vi>
achin, I'm trying to access data I stored in IPFS when I played with it half a year ago (with 0.4.0) and IPNS is getting in the way. And it won't mount /ipfs without /ipns.
<_Vi>
achin, It tells me it fails to resolve some host, so mounting failed. It implies it tries to access the network even before giving me a chance to access my locally cached files.
<pjz>
as far as 'production'... I'm launching a paid pinning service at pinbits.io, so there's that.
<pjz>
you might see errors about the DHT not being able to connect
<achin>
yeah, ipfs will fail to connect to the bootstrap nodes, but i didn't think that was fatal
<achin>
since i've seen people in here asking for help when their node has zero peers
<_Vi>
"ipfs ls" and "ipfs cat" works. How to I get FUSE mount without even thinking about what is IPNS?
<achin>
might need an issue on github for that
<_Vi>
Is IPNS essential part of IPFS or like "upper layer"? Can I work with IPFS alone without IPNS (like using IP addresses without DNS)?
<achin>
yes. ipns isn't required at all, to the best of my knowledge
chris613 has joined #ipfs
<_Vi>
How do I disable it then? "ipfs mount --help" documents the /ipfs and /ipns mountpoints like left and right hands, allowing you only to override the mountpoints, not to opt out...
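For reference, the overrides it does allow look roughly like this; there is no flag here to skip /ipns entirely (the mountpoint paths are just examples):

    # create the mountpoints once, then mount both namespaces while the daemon runs
    sudo mkdir -p /ipfs /ipns
    sudo chown $USER /ipfs /ipns
    ipfs mount -f /ipfs -n /ipns   # -f/--ipfs-path and -n/--ipns-path override the defaults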
<achin>
what was the error you saw when you tried to mount the FUSE layer?
<_Vi>
Actually the ~/ipfs mount works for some time after "ipfs mount". But then it automatically unmounts.
<_Vi>
"Error: Could not resolve name."
<_Vi>
Like in multiple Github issues.
<_Vi>
Shall I create one more issue requesting "ipfs mount" to be able to mount ipfs and ipns separately and independently?
bwerthmann has quit [Ping timeout: 246 seconds]
chris6131 has joined #ipfs
cemerick has quit [Ping timeout: 256 seconds]
chris613 has quit [Ping timeout: 248 seconds]
<achin>
do you want independent control of the mount point just to work around this "Could not resolve name" issue?
<achin>
there might be other good reasons to control them independently
<Mateon1>
Could not resolve name might be related to your own IPNS entry; if I recall correctly, it's mounted at /ipns/local or something, and something else might be trying to enter that directory
<Mateon1>
Try publishing an empty directory
<achin>
since this is an old issue, it would be helpful to update #2167 with an updated status (including what version of IPFS you're running)
cyanobacteria has joined #ipfs
<_Vi>
Filed my own issue: #3563
<_Vi>
Mateon1, I think it will work. But it may fail again in half a year of non-usage.
<Mateon1>
I think FUSE might be mounting before the republisher gets a chance to work, not sure
<achin>
_Vi: what type of linux are you on?
<_Vi>
Linux Debian. Does that matter?
<achin>
probably not
<_Vi>
Can it just mount a dummy /ipns that fails or hangs all requests until it gets properly initialized?
<_Vi>
When that publishing (or whatever) is done, /ipns just starts working.
<_Vi>
"ipfs mount" should just start FUSE layer that translates filesystem requests to IPFS/IPNS requests, without trying to check integrity of node or network.
bwerthmann has joined #ipfs
<_Vi>
Is the performance penalty for accessing locally cached files over an IPFS FUSE mount big or small? Small means like with bindfs or fusexmp. Big means like when using sshfs to localhost.
<achin>
somewhere in between is my guess, but i don't know for sure
<_Vi>
Is FUSE mount designed to handle random access to large files?
<achin>
if you "ipfs get" a hash that is already fully available locally, ipfs shouldn't need to talk to the network at all
<achin>
i believe the FUSE api is sophisticated enough to handle this, but i don't know if IPFS has the smarts to do this yet
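One rough way to check this before going offline (QmSomeHash is a placeholder; `ipfs refs local` only lists block hashes, so the grep checks the root block rather than the whole DAG):

    # is the root block already in the local repo?
    ipfs refs local | grep QmSomeHash
    # or simply try the fetch with the daemon stopped; it can only succeed from local blocks
    ipfs get QmSomeHash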
<_Vi>
For example, is VM image on IPFS a bad idea or very bad idea?
<achin>
can VM images be read-only?
<_Vi>
Is IPFS read-only?
<Mateon1>
Well, if you have an IPFS hash, it is immutable, but you can create new hashes representing the changed content
<_Vi>
Is creating the hash of a big file based on another big file with just one changed block in the middle a cheap operation?
<Mateon1>
For a VM image/disk, you would have to re-add the image when the VM is shut down.
<Mateon1>
It is relatively cheap
cryptix_ has joined #ipfs
<_Vi>
For example, has somebody experimented with storing Docker images on IPFS?
<_Vi>
(They are readonly or relatively readonly)
<Mateon1>
When adding, the file is split into chunks and hashed; the hashing is relatively quick, but storing new blocks into the datastore is currently a bottleneck
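A rough way to see that chunk-level sharing, assuming the default fixed-size (256 KiB) chunker and using placeholder hashes:

    ipfs add -q bigfile                  # prints the root hash, e.g. QmOldRoot
    # flip one 256 KiB-aligned block in the middle, then re-add
    dd if=/dev/urandom of=bigfile bs=256k count=1 seek=100 conv=notrunc
    ipfs add -q bigfile                  # prints a new root, e.g. QmNewRoot
    # the roots differ, but almost all chunk hashes are shared (stored only once)
    ipfs refs -r QmOldRoot | sort > old.refs
    ipfs refs -r QmNewRoot | sort > new.refs
    comm -12 old.refs new.refs | wc -l   # count of chunks common to both versions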
<achin>
there could be interesting applications in applying IPFS layers in some type of overlayfs, having the writable layer be a small local filesystem, with everything else being on ipfs read-only
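A sketch of that idea with the kernel's overlayfs, using a path under the FUSE-mounted /ipfs as the read-only lower layer (the hash is a placeholder, and whether overlayfs accepts a FUSE lowerdir would need testing):

    mkdir -p /tmp/upper /tmp/work /mnt/merged
    sudo mount -t overlay overlay \
        -o lowerdir=/ipfs/QmBaseLayer,upperdir=/tmp/upper,workdir=/tmp/work \
        /mnt/merged
    # reads fall through to the immutable ipfs layer; writes land in /tmp/upper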
<_Vi>
For example, what if you just store the entire /var/lib/docker on IPFS?
<_Vi>
(with overlay2 backend)
cryptix has quit [Ping timeout: 245 seconds]
cryptix_ has quit [Client Quit]
<_Vi>
(except for tmp and containers maybe)
<Mateon1>
/tmp is stored in memory anyway, right?
<achin>
not generally, no
<achin>
oh, hmm
<achin>
mine is actually a tmpfs
<achin>
maybe i'm behind the times
jager has quit [Ping timeout: 248 seconds]
jager has joined #ipfs
dignifiedquire has quit [Quit: Connection closed for inactivity]
infinity0 has quit [Ping timeout: 248 seconds]
infinity0 has joined #ipfs
rawtaz has quit [*.net *.split]
TeMPOraL has quit [*.net *.split]
voker57 has quit [*.net *.split]
Mitar has quit [*.net *.split]
d6e has quit [*.net *.split]
area has quit [*.net *.split]
elasticdog has quit [*.net *.split]
yangwao has quit [*.net *.split]
cwill has quit [*.net *.split]
elimisteve has quit [*.net *.split]
pjz has quit [*.net *.split]
null_radix has quit [*.net *.split]
misuto has quit [*.net *.split]
tperson has quit [*.net *.split]
plddr has quit [*.net *.split]
cwill__ has joined #ipfs
pjz has joined #ipfs
elimisteve has joined #ipfs
tperson has joined #ipfs
voker57 has joined #ipfs
misuto has joined #ipfs
plddr has joined #ipfs
rawtaz has joined #ipfs
area has joined #ipfs
Mitar has joined #ipfs
yangwao has joined #ipfs
voker57 has quit [Changing host]
voker57 has joined #ipfs
d6e has joined #ipfs
area has quit [Changing host]
area has joined #ipfs
elasticdog has joined #ipfs
TeMPOraL has joined #ipfs
infinity0 has quit [Remote host closed the connection]
Kingsquee has joined #ipfs
null_radix has joined #ipfs
infinity0 has joined #ipfs
tilgovi has quit [Ping timeout: 240 seconds]
crossdiver has joined #ipfs
<crossdiver>
hey! /wave
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
<achin>
ohi
anewuser_ has joined #ipfs
bastianilso____ has quit [Remote host closed the connection]
bastianilso____ has joined #ipfs
anewuser has quit [Ping timeout: 258 seconds]
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
tmg has quit [Ping timeout: 256 seconds]
tmg has joined #ipfs
mguentner has quit [Quit: WeeChat 1.6]
mguentner has joined #ipfs
jonnycrunch has quit [Quit: jonnycrunch]
wallacoloo_____ has joined #ipfs
qpqp has quit [Read error: Connection reset by peer]
mguentner2 has joined #ipfs
mguentner has quit [Ping timeout: 245 seconds]
Guest172 has joined #ipfs
bwerthmann has quit [Ping timeout: 248 seconds]
pfrazee has quit [Remote host closed the connection]
bwerthmann has joined #ipfs
arkimedes has quit [Ping timeout: 245 seconds]
infinity0 has quit [Ping timeout: 248 seconds]
arkimedes has joined #ipfs
infinity0 has joined #ipfs
chris6131 has quit [Quit: Leaving.]
anewuser_ has quit [Ping timeout: 248 seconds]
pfrazee has joined #ipfs
pfrazee has quit [Ping timeout: 248 seconds]
bastianilso____ has quit [Quit: bastianilso____]
jager has quit [Read error: Connection reset by peer]
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
mib_kd743naq has joined #ipfs
<mib_kd743naq>
<pjz> test98_0: IPNS is the mapping from a name to a hash
<mib_kd743naq>
^^ why do we keep saying that? I've seen it in documentation as well "mapping from name to hash"
<mib_kd743naq>
it's super misleading...
pfrazee has joined #ipfs
wallacoloo_____ has quit [Quit: wallacoloo_____]
pfrazee has quit [Ping timeout: 240 seconds]
infinity0 has quit [Remote host closed the connection]
elico has joined #ipfs
elico1 has joined #ipfs
arkimedes has quit [Ping timeout: 272 seconds]
elico has quit [Ping timeout: 240 seconds]
infinity0 has joined #ipfs
elico1 has quit [Read error: Connection reset by peer]
mildred_ has quit [Read error: Connection reset by peer]
mildred__ has joined #ipfs
s_kunk has joined #ipfs
<kpcyrd>
whyrusleeping: what do you think about having symlinks to /ipfs/Qm... so you're able to `ipfs add -r` without having all the data locally?
<kpcyrd>
whyrusleeping: that would allow you to create objects pointing to terabytes of data on a small ssd
mildred has quit [Remote host closed the connection]
gts has joined #ipfs
<Kubuxu>
whyrusleeping: do you think that the CHANGELOG lines should be in the initial PR comment?
<G-Ray_>
Hi, is it possible to automatically watch for changes in a folder ?
<G-Ray_>
I would like to replace owncloud with ipfs to share my files
Alastair has quit [Read error: Connection reset by peer]
brixen has joined #ipfs
ELLIOTTCABLE has quit [Ping timeout: 240 seconds]
ELLIOTTCABLE has joined #ipfs
ELLIOTTCABLE has quit [Excess Flood]
ELLIOTTCABLE has joined #ipfs
jkilpatr has joined #ipfs
Boomerang has joined #ipfs
Falconix has quit [Read error: Connection reset by peer]
Falconix has joined #ipfs
mildred has joined #ipfs
disoxygenation has joined #ipfs
zanadar has quit [Ping timeout: 255 seconds]
rendar has joined #ipfs
_Vi has quit [Ping timeout: 240 seconds]
<achin>
mib_kd743naq: what would your proposed phrasing be instead?
slothbag has quit [Quit: Leaving.]
anonymuse has quit [Remote host closed the connection]
<mib_kd743naq>
achin: something like "IPNS provides a stable "label" that can point to an arbitrary IPFS node ( e.g. a directory or file ). It is somewhat similar to a DNS CNAME record in that it can only be modified by the record owner, but unlike DNS it leverages the ipfs DHT for rapid updates and is not impacted by client-side caching"
<mib_kd743naq>
needs polish of course, but the main point is to not use "name" as it implies "human readability" which IPNS definitely doesn't provide
Kingsquee has quit [Quit: Konversation terminated!]
<achin>
is "maps a label (ipfs node ID) to a hash" better? not going for the fully complete answer, just the short phrase to get people started
<achin>
or even "maps your ipfs node ID to a hash"
mildred__ has quit [Remote host closed the connection]
<mib_kd743naq>
achin: that's not entirely correct anymore, as the keystore thing is almost out of beta
mildred__ has joined #ipfs
<mib_kd743naq>
and there you get an unlimited amount of keys per ipfs node, all resolvable via /ipns ( or this is at least my understanding )
<mib_kd743naq>
Kubuxu: ^^ correct me if I am wrong...
<mib_kd743naq>
achin: and "maps your node ID to a hash" is not... exactly correct either
<mib_kd743naq>
hence me using a nebulous "label"
<achin>
as far as i know, ipfs today only maps a node ID
<achin>
and indeed only one (ipfs name publish doesn't take any arguments besides the single hash to publish)
<achin>
(publishing multiple keys has indeed been a long and often requested feature)
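For context, the single-key flow being described looks like this (the hash and peer ID are placeholders):

    # sign and publish a pointer from your node's peer ID to a hash
    ipfs name publish /ipfs/QmSomeContent
    # anyone can then resolve that peer ID back to the current hash
    ipfs name resolve QmYourPeerID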
<mib_kd743naq>
achin: you are correct, current master build does not yet have the `ipfs key` subcommand
<mib_kd743naq>
*however* I still think it is worthwhile describing ipns from the consumer pov
<mib_kd743naq>
i.e. what a user of ipfs.io/ipns/ can see
<mib_kd743naq>
that *currently* this system can only resolve node ID's is an implementation detail
<achin>
yes, i suppose there is a bit of confusion in that a /ipns/domainname path can work without actually using IPNS
s_kunk has quit [Quit: Read error: Connection reset by beer]
<mib_kd743naq>
implementation details are awesome and exciting and thought provoking, but in the end do not belong in "first contact" documentation as they only add confusion in such spots
<achin>
it's a good point
<achin>
i'm not sure i agree with that, but i think i see what you're getting at
s_kunk has joined #ipfs
<achin>
brb, driving into work
pfrazee has joined #ipfs
mguentner2 is now known as mguentner
pfrazee has quit [Ping timeout: 240 seconds]
jchevalay has quit [Quit: Page closed]
nunofmn has joined #ipfs
<achin>
the short reply is that while knowing how to use something is very important, sometimes the best way to explain how to use something is to explain how it works
cemerick has quit [Ping timeout: 256 seconds]
<ansuz>
spoken like a true engineer
<ansuz>
<3
cemerick has joined #ipfs
ylp has quit [Quit: Leaving.]
PseudoNoob has quit [Quit: Leaving]
<whyrusleeping>
current master does have ipfs key
<whyrusleeping>
>.>
<whyrusleeping>
mib_kd743naq: achin ^
<mib_kd743naq>
whyrusleeping: some days ago you poked me to try to break HAMT-dirs, which branch supports this / where is the latest spec/impl for it? ( just general pointers to branches and "start from this .go" are enough )
<achin>
whyrusleeping: neat! clearly i am behind by a few days
<achin>
or weeks
<achin>
or maybe whyrusleeping is magic and has gone back in time to implement this feature
<whyrusleeping>
;)
<mib_kd743naq>
whyrusleeping: ok, likely won't be able to dig into it mid-next-week, but bookmarked for breakage
<whyrusleeping>
sweet, thanks :)
<mib_kd743naq>
whyrusleeping: it seems a rebase is still pending: could you do that before I try to build it next week?
<mib_kd743naq>
lots moved since
maxlath has quit [Ping timeout: 256 seconds]
Guest172 has quit [Read error: Connection reset by peer]
Guest172 has joined #ipfs
silotis has quit [Quit: No Ping reply in 180 seconds.]
s_kunk has quit [Read error: Connection reset by peer]
bwerthmann has quit [Ping timeout: 246 seconds]
maxlath has joined #ipfs
s_kunk has joined #ipfs
s_kunk has joined #ipfs
s_kunk has quit [Changing host]
_Vi has joined #ipfs
silotis has joined #ipfs
<whyrusleeping>
mib_kd743naq: ah, sure
s_kunk has quit [Max SendQ exceeded]
s_kunk has joined #ipfs
nonaTure is now known as unchained
unchained is now known as unchain
chungy has quit [Ping timeout: 258 seconds]
unchain has quit [Quit: Leaving]
unchain has joined #ipfs
unchain has quit [Client Quit]
chungy has joined #ipfs
unchain has joined #ipfs
unchain is now known as aeternity
cwahlers_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
anewuser has joined #ipfs
cemerick has quit [Ping timeout: 256 seconds]
cemerick has joined #ipfs
anonymuse has joined #ipfs
muvlon has joined #ipfs
maxlath has quit [Ping timeout: 248 seconds]
tabrath has quit [Ping timeout: 258 seconds]
tmg has quit [Ping timeout: 272 seconds]
<brodo>
i'm going to be at republica in berlin this year again. anyone else going?
mildred_ has joined #ipfs
<whyrusleeping>
brodo: when is that?
<brodo>
may 8-10
mildred__ has quit [Remote host closed the connection]
<whyrusleeping>
I won't be in germany at that time
<whyrusleeping>
but some others might
kenshyx has joined #ipfs
<brodo>
ok, it's still a long way to go but talks have to be submitted by january the 8th
janoszen_ has joined #ipfs
<brodo>
I am most likely there as part of GIG, so if we want to do a small workshop (max 15 people) we can maybe do it at the GIG booth pretty spontaneously
<janoszen_>
Hello everyone! We have done some digging on IPFS, my question is: how do you create a chat application like Orbit? How does creating and linking content together work?
bwerthmann has joined #ipfs
bwerthmann has quit [K-Lined]
shizy has joined #ipfs
bastianilso____ has joined #ipfs
ashark has joined #ipfs
Akaibu has quit [Quit: Connection closed for inactivity]
webdev007 has joined #ipfs
dignifiedquire has joined #ipfs
pfrazee has joined #ipfs
anewuser has quit [Ping timeout: 245 seconds]
Boomerang has quit [Quit: leaving]
ulrichard has quit [Read error: Connection reset by peer]
cyanobacteria has joined #ipfs
cyanobacteria has quit [Ping timeout: 255 seconds]
ylp has quit [Remote host closed the connection]
zanadar has joined #ipfs
kobajagi has joined #ipfs
kobajagi has quit [Remote host closed the connection]
zanadar has quit [Ping timeout: 272 seconds]
zanadar has joined #ipfs
<r0kk3rz>
janoszen_ look at orbit-db and ipfs-log
baffo32 has quit [Remote host closed the connection]
zanadar has quit [Ping timeout: 240 seconds]
wlp1s1 has quit [Max SendQ exceeded]
baffo32 has joined #ipfs
zanadar has joined #ipfs
iczero has joined #ipfs
reit has quit [Ping timeout: 252 seconds]
john1 has joined #ipfs
zanadar has quit [Ping timeout: 258 seconds]
Guest172 has quit [Ping timeout: 260 seconds]
pcre has joined #ipfs
pcre has quit [Client Quit]
iovoid has quit [Ping timeout: 240 seconds]
iczero has quit [Ping timeout: 245 seconds]
zanadar has joined #ipfs
zanadar has quit [Ping timeout: 245 seconds]
mildred_ has quit [Ping timeout: 272 seconds]
zanadar has joined #ipfs
janoszen_ has quit [Ping timeout: 260 seconds]
_Vi has quit [Ping timeout: 258 seconds]
Boomerang has joined #ipfs
Akaibu has joined #ipfs
mildred_ has joined #ipfs
zanadar has quit [Ping timeout: 246 seconds]
Aranjedeath has joined #ipfs
taaem has quit [Ping timeout: 258 seconds]
kenshyx has quit [Quit: Leaving]
iovoid has joined #ipfs
G-Ray_ has quit [Quit: G-Ray_]
iczero has joined #ipfs
pcre has joined #ipfs
arkimedes has joined #ipfs
<mib_kd743naq>
whyrusleeping: publishing to an alternative key works
<mib_kd743naq>
but performance is a really mixed bag
<mib_kd743naq>
when offline - `ipfs name resolve` is instant ( as expected )
<mib_kd743naq>
when the daemon is running in the swarm - a *local* resolution ( against the defining node ) takes forever
<mib_kd743naq>
same goes for ipfs.io/ipns - eventually resolves, but *damn*
iczero is now known as wlp1s1
<mib_kd743naq>
( after some errors )
Encrypt has joined #ipfs
gigq has quit [Ping timeout: 246 seconds]
<mib_kd743naq>
yeah something is clearly amiss...
<mib_kd743naq>
I will leave my node running
gigq has joined #ipfs
<mib_kd743naq>
serving /ipns/QmUQzmTvfgj5LY8yh4WdTfRpwWwCwRhd4W5YvZZsxoWVmY and /ipns/QmTFFqTor9MjGX1Rv5QoTxiR7hjhVkKgdGxbLLtbQmxJg3
jkilpatr has quit [Ping timeout: 240 seconds]
robattila256 has joined #ipfs
_Vi has joined #ipfs
jkilpatr has joined #ipfs
saintromuald has quit [Read error: Connection reset by peer]
cwahlers has joined #ipfs
s_kunk has quit [Ping timeout: 256 seconds]
john2 has joined #ipfs
Oatmeal has quit [Ping timeout: 245 seconds]
<achin>
fwiw, none of that sounds new
john1 has quit [Ping timeout: 260 seconds]
cwill__ is now known as cwill
<mib_kd743naq>
achin: there has been work on the name system in master afaik, so restating how the tip works
<mib_kd743naq>
( plus me serving 2 ipns entries from the same node is clearly new ;)
<achin>
true! and that's very cool!
<mib_kd743naq>
I think the real problem is that this node is at home, behind a couple nats
<achin>
btw, how are you publishing the other keys?
<achin>
the docs still say "Publish an <ipfs-path> to another public key" is not implemented. maybe the docs are just out of date?
espadrine has quit [Ping timeout: 255 seconds]
ashark has quit [Ping timeout: 258 seconds]
ashark has joined #ipfs
<pjz>
so you can now just make up any number of keypairs and publish to all the different pubkeys?
<mib_kd743naq>
pjz: yes
<mib_kd743naq>
achin: ipfs name publish takes a -k option in current master
<mib_kd743naq>
`ipfs name publish --help`
<mib_kd743naq>
pjz: `ipfs key gen --type=rsa --size=4096 blah`
<mib_kd743naq>
pjz: `ipfs name publish -k blah <target>`
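Putting those pieces together (current master only; 'blah' and the target hash are the placeholders from the lines above, and `ipfs key list -l` showing the key's hash is an assumption):

    ipfs key gen --type=rsa --size=4096 blah     # generate a named key
    ipfs key list -l                             # list keys with their hashes (incl. "self")
    ipfs name publish -k blah /ipfs/QmSomeTarget
    ipfs name resolve <hash-of-the-blah-key>     # resolves to /ipfs/QmSomeTarget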
Encrypt has quit [Quit: Quit]
<pjz>
mib_kd743naq: ...what version is that? oh, 'tip'...I'm running 0.4.4
LiberalSquash has joined #ipfs
LiberalSquash has left #ipfs [#ipfs]
<mib_kd743naq>
pjz: yeah, it's super-new
s_kunk has joined #ipfs
s_kunk has joined #ipfs
s_kunk has quit [Changing host]
jkilpatr_ has joined #ipfs
jkilpatr has quit [Ping timeout: 240 seconds]
Mizzu has joined #ipfs
<pjz>
if I publish my pubkey and a hash, does that live on my node? or is it inserted into the DHT somehow? And if my node goes off, does it go away?
<pjz>
and how's it relate to what goes in a DNS TXT record?
Oatmeal has joined #ipfs
double-you has joined #ipfs
<mib_kd743naq>
pjz: ( my understanding is not 100% but here's what I know )
<mib_kd743naq>
any ipns entry is propagated through the dht like everything else, and is subject to more or less the same expiry times
<mib_kd743naq>
pjz: the difference is that a running node which "created" the entry republishes it periodically while running
<mib_kd743naq>
so things look like they are "always there"
<mib_kd743naq>
the DNS TXT part is orthogonal - the txt entry can point either to an /ipns/foo or to an /ipfs/foo directly
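For reference, that TXT record looks roughly like this (example.com and the hashes are placeholders):

    # zone-file style, illustrative:
    #   example.com.  IN TXT  "dnslink=/ipns/QmYourPeerID"
    #   example.com.  IN TXT  "dnslink=/ipfs/QmSomeContent"
    # afterwards /ipns/example.com resolves through that record:
    ipfs dns example.com
    dig +short example.com TXT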
wlp1s1 is now known as DogeWithFez
DogeWithFez is now known as wlp1s1
Zer0CooL has quit [Ping timeout: 248 seconds]
Zer0CooL has joined #ipfs
espadrine has joined #ipfs
<pjz>
hmm
cemerick has quit [Read error: Connection reset by peer]
<pjz>
does it take having the privkey to publish something under the pubkey ?
<mib_kd743naq>
pjz: yes, this is where the authentication comes from
<pjz>
k
<pjz>
...the node has to keep republishing it, though? ugh. And there's no way to get someone else to '
<pjz>
..to 'pin' it for me, is there?
<mib_kd743naq>
pjz: not sure about that part
<pjz>
I mean, I can get soemone to pin some content for me
<pjz>
but I can't get them to pin my pubkey->hash mapping for me
<mib_kd743naq>
pjz: well - you can try and pin my pointers I listed above, see what happens
<pjz>
mib_kd743naq: I'd bet it will pin the data but not the name
<mib_kd743naq>
( I don't know if this is even possible, and trying on the node that has them makes no sense )
<pjz>
well, even if it works, I bet it doesn't work for v0.4.4 :)
tilgovi has joined #ipfs
<mib_kd743naq>
yeah... `ipfs dht get QmUQzmTvfgj5LY8yh4WdTfRpwWwCwRhd4W5YvZZsxoWVmY` doesn't return anything for me
<mib_kd743naq>
so I guess it's a different mechanism
<mib_kd743naq>
sorry - I am just a passenger ;)
<pjz>
'sokay
<pjz>
I'm not sure anyone's thought of the problem yet
<pjz>
DNS is its own kind of authentication, I suppose
<mib_kd743naq>
pjz: if you have full dnssec setup, and if the gateways validate it - then yes
<mib_kd743naq>
though no idea if this is the case currently
<pjz>
oh, security isn't quite what I meant, but yeah.
<achin>
dns implies some level of 'ownership'
<mib_kd743naq>
erm... security without authentication is a bit... moot ;)
<mib_kd743naq>
achin: it does, but without dnssec every provider along the way can mitm it
tilgovi has quit [Ping timeout: 255 seconds]
<achin>
yes (though that would be difficult in practice)
webdev007 has quit [Ping timeout: 248 seconds]
<mib_kd743naq>
achin: why do you consider it to be difficult? ( serious question )
f3_ has joined #ipfs
<achin>
let me answer with a question (sorry!). if you knew i was going to resolve google.com at some point, and wanted to hijack that request, where exactly would you mitm?
<mib_kd743naq>
achin: google.com is a difficult example because there are too many things that need to be taken into account
<mib_kd743naq>
however what I would do if I am a bad actor
<achin>
(i'm kinda thinking aloud here, i've not thought carefully about all the threat vectors to dns lookups)
tilgovi has joined #ipfs
cemerick has joined #ipfs
<mib_kd743naq>
is to carry around a rogue VM on my laptop that is set to mitm dns and auto-proxy requests for popular repositories ( npm, dist isos, apt repos, the works )
<mib_kd743naq>
and replace all these pieces with backdoored stuff where needed
<mib_kd743naq>
if it is some "trendy hackspace" in Berlin - within a week or two there'll be a catch at *some* spot
<mib_kd743naq>
( yes I know a lot of stuff is supposedly signed, and so on and so forth, but weak spots exist in virtually everything )
<achin>
and we'd probably have to ignore https/ssl
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<achin>
which is fine to ignore
<mib_kd743naq>
so for ipfs-over-dns to be truly thought-free the weak-link of DNS should be secured
<mib_kd743naq>
then one can say "I got this from an IPFS gateway - it's provably correct, and the daemon/gateway did all the proofs already"
<mib_kd743naq>
just like when you `git clone` - you don't doubt you got the right thing
<mib_kd743naq>
( until, in the very distant future (if ever), sha1 preimage attacks become feasible )
webdev007 has joined #ipfs
<achin>
ok, so i'll take back what i said earlier. i had in mind some global attack on a specific domain that would affect everybody. while a specific weakness (like hacking the nameserver) might make this easy, this sounds hard to me in general. but i failed to consider your autoproxy example where you get on some shared local network and cause havoc with local clients
<pjz>
mib_kd743naq: so you'd have to start by mitm'ing dhcp so you can specify your own dns server to someone
<mib_kd743naq>
pjz: nah, arp-poisoning the existing dns is enough ( as it is often on the same box as the router ;)
<pjz>
or so you can get access to their packets
<pjz>
mib_kd743naq: lots of people have 8.8.8.8 or 4.4.4.4 hardcoded as their dns these days
<mib_kd743naq>
wifi *usually* mitigates this by client isolation
<mib_kd743naq>
but not always
<achin>
mib_kd743naq is in ur networks, nomming your packets
<pjz>
because of idiots like verizon who hijack failed lookups
<mib_kd743naq>
you are looking at it from the PoV of a target being smart enough to avoid most of this
<mib_kd743naq>
I am speaking from the PoV of an attacker who casts a wide net and has all the time in the world
<pjz>
'smart enough' is just having a hardcoded DNS server listed and ignoring the one dhcp hands out
<pjz>
but it's a fair point
<mib_kd743naq>
pjz: there are automated toolkits that do dns poisoning out there too
<mib_kd743naq>
i.e. listen for udp 53 queries
<pjz>
:nods.
<mib_kd743naq>
and flood the network with "false" responses before the real one arrives
<mib_kd743naq>
( that's the part dnssec really is designed to protect from )
<achin>
just think
<achin>
in our lifetimes (ok, not really sure how old you all are)
<achin>
we had world-readable /etc/passwd files
maxlath has joined #ipfs
<mib_kd743naq>
to put it another way: IPFS is already a long way toward providing a self-verifying secure "network overlay". If by default the gateways/clients can do the extra work to validate what they can on the dns side, they should
tabrath has joined #ipfs
<pjz>
hmm. maybe they should try and 'manually' do dnssec validation when possible?
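A sketch of what such a manual check could look like (example.com is a placeholder; delv availability is an assumption):

    # request DNSSEC records; a validating resolver sets the 'ad' flag in the reply
    dig +dnssec example.com TXT
    # delv (shipped with BIND 9.10+) performs the validation client-side
    delv example.com TXT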
<mib_kd743naq>
that's a question for the old hats at this point ;)
<mib_kd743naq>
pjz: perhaps file an ipfs/notes ticket