nessence has quit [Read error: Connection reset by peer]
Matoro has quit [Ping timeout: 260 seconds]
nessence has joined #ipfs
arpu has joined #ipfs
<spikebike>
hrmpf
<spikebike>
$ ipfs get /ipfs/QmdVGBGisVyit787cv9fGgMXnfw5ZwBRZhgWZcpenvKFgc/blog/img/banner.png
<spikebike>
Error: read tcp 127.0.0.1:48109->127.0.0.1:5001: read: connection reset by peer
<jbenet>
hm
<achin>
daemon crash?
Matoro has joined #ipfs
<jbenet>
this is pretty awesome, so DNS + IPFS (even without IPNS) is actually really viable as a "deployment of static content". 60s TTL to cache invalidate is not even that bad.
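(For context: "DNS + IPFS" here is the dnslink mechanism, where a TXT record on a domain points at an IPFS path, and the 60s TTL bounds how stale a cached pointer can get. A minimal sketch, assuming a hypothetical domain example.com and reusing the hash from the earlier ipfs get as a placeholder:)
    ; zone file entry with a 60-second TTL so a new site hash propagates within a minute
    example.com.  60  IN  TXT  "dnslink=/ipfs/QmdVGBGisVyit787cv9fGgMXnfw5ZwBRZhgWZcpenvKFgc"
    ; the site is then reachable through a gateway under /ipns/example.com/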
<victorbjelkholm>
is there a list of things that pinbot has pinned? Or any other content that might be good to use in my testing for openipfs?
<spikebike>
go-ipfs/0.3.9-dev running, I'll keep an eye on it
fingertoe has quit [Ping timeout: 246 seconds]
kerozene has joined #ipfs
pfraze has quit [Remote host closed the connection]
jhulten has quit [Ping timeout: 244 seconds]
lhobas has quit [Ping timeout: 250 seconds]
RX14 has quit [Ping timeout: 250 seconds]
r04r has quit [Ping timeout: 250 seconds]
lhobas has joined #ipfs
RX14 has joined #ipfs
reit has joined #ipfs
acidhax has joined #ipfs
r04r has joined #ipfs
flounders has joined #ipfs
ipfs_intern has quit [Ping timeout: 246 seconds]
r04r is now known as zz_r04r
nicolagreco has quit [Remote host closed the connection]
nicolagreco has joined #ipfs
ygrek_ has joined #ipfs
fingertoe has joined #ipfs
jhulten has joined #ipfs
ygrek has quit [Ping timeout: 250 seconds]
f[x] has joined #ipfs
tilgovi has joined #ipfs
ygrek_ has quit [Ping timeout: 264 seconds]
f[x] has quit [Ping timeout: 260 seconds]
voxelot has joined #ipfs
hoboprimate has quit [Quit: hoboprimate]
hoboprimate_ has joined #ipfs
hoboprimate_ is now known as hoboprimate
hoboprimate has quit [Client Quit]
pfraze has joined #ipfs
M-jgrowl has quit [Remote host closed the connection]
M-whyrusleeping has quit [Remote host closed the connection]
erikj` has quit [Remote host closed the connection]
M-trashrabbit has quit [Remote host closed the connection]
M-oddvar1 has quit [Remote host closed the connection]
M-rschulman1 has quit [Remote host closed the connection]
M-matthew1 has quit [Remote host closed the connection]
M-nated has quit [Remote host closed the connection]
M-giodamelio1 has quit [Remote host closed the connection]
M-rschulman2 has quit [Remote host closed the connection]
M-jfred has quit [Remote host closed the connection]
M-amstocker has quit [Remote host closed the connection]
M-hash has quit [Remote host closed the connection]
M-edrex1 has quit [Remote host closed the connection]
M-mistake1 has quit [Remote host closed the connection]
M-jbenet1 has quit [Remote host closed the connection]
M-staplemac has quit [Remote host closed the connection]
davidar has quit [Remote host closed the connection]
domanic has quit [Ping timeout: 260 seconds]
fingertoe has quit [Ping timeout: 246 seconds]
cemerick has joined #ipfs
nessence has quit [Remote host closed the connection]
dd0 has joined #ipfs
nessence has joined #ipfs
M-edrex has joined #ipfs
M-rschulman1 has joined #ipfs
M-giodamelio has joined #ipfs
M-matthew has joined #ipfs
M-trashrabbit has joined #ipfs
M-jfred has joined #ipfs
fingertoe has joined #ipfs
M-erikj has joined #ipfs
M-nated has joined #ipfs
M-staplemac has joined #ipfs
M-whyrusleeping has joined #ipfs
M-jgrowl has joined #ipfs
M-mistake has joined #ipfs
M-jbenet has joined #ipfs
M-prosodyContext has joined #ipfs
M-amstocker has joined #ipfs
M-davidar has joined #ipfs
M-rschulman has joined #ipfs
M-hash has joined #ipfs
M-oddvar has joined #ipfs
<nicolagreco>
I wonder to what extent the web should be permanent and how I can use ipfs for things I don't want to be permanent
<achin>
do you make a distinction between
<achin>
"things i don't want to keep permanent" and "things i don't ever want to be permanent"
<nicolagreco>
this is the point I was going to make
cemerick has quit [Ping timeout: 240 seconds]
<nicolagreco>
so, to give you an example
<nicolagreco>
imagine I have a blog post online, now, I can give you the link to it
<achin>
i think there are plenty of use cases when you might want to pin something for a few hours or days, but then after that garbage collect it
<nicolagreco>
imagine if I made an incredible grammar mistake, that I want to fix
<nicolagreco>
since I control the content at that URL, I can just change it, and you won't notice on reload
<nicolagreco>
now if I use ipfs as it is (so assume no crypto going on)
<nicolagreco>
I will be sending you two hashes
<nicolagreco>
and you may download both
<achin>
i think in the world of IPFS, you use ipns+js (or something similar) to put a little message at the top of the blog "this blog has a newer version, please find it here: "
<nicolagreco>
and since you have the resource, the resource won't really ever be deleted
<nicolagreco>
so basically, the web, as it is, is "futile" in the sense that it doesn't store web pages
<nicolagreco>
which in other words means that, unless you are actively making a copy
<nicolagreco>
my grammar mistake will just pass without a trace
<nicolagreco>
while with ipfs, you download things by design
<nicolagreco>
achin my problem is not about giving the newer version to the users
<nicolagreco>
my point is about removing the grammar mistake from the web
<achin>
aren't those the same thing?
<nicolagreco>
what do you mean?
<achin>
that's the difference between deleting something from the web (the grammar mistake), and helping all visitors migrate to the latest version that doesn't have the grammar mistake
<fingertoe>
I suspect if you were proactive you could pre-program your page to redirect to the newer version, if such a thing exists.
<achin>
(i know there is a difference, but is that difference meaningful?)
<nicolagreco>
redirecting does not remove the grammar mistake from the web
<nicolagreco>
my worry is on the first case
<nicolagreco>
let me give you a more practical example
<achin>
so i think there are actually two much more interesting examples
<nicolagreco>
(this happened to me)
<nicolagreco>
I write a blog post that has a particular picture
<achin>
the first is: you didn't just make a grammar mistake or a typo, but in your blog post you by mistake released one of your company's trade secrets
<nicolagreco>
I post it online, I share it and now x thousand users have my post with the picture
<nicolagreco>
The photographer is upset with that picture since he owns the copyright
<nicolagreco>
he tells me to remove the picture and I do so
<nicolagreco>
however he calls me back saying "I SAID REMOVE THE PICTURE"
<nicolagreco>
and I reply, what do you mean? I updated my blog
<nicolagreco>
and he gives me the ipfs link of the previous hash
<achin>
ok, i like that example. what happens today (in normal WWW + HTTP) when that happens?
<nicolagreco>
and he wants me to remove that picture
<nicolagreco>
if I can't explain how to remove it, he will take legal actions
<nicolagreco>
and my mistake in posting that picture that __can't be removed__
<nicolagreco>
will really cost me a lot
<Xe>
don't post things you don't have permission to
<nicolagreco>
Today, it happens that the photographer calls me and says remove the picture
<spikebike>
well there's what's legal and not, which is sadly unrelated to the ability to be sued
<nicolagreco>
and I remove the picture
<nicolagreco>
Xe well, the world is much more complex than that
<spikebike>
and 100 other sites pick it up
<achin>
even today there are plenty of examples where content cannot be erased from the web
<Xe>
nicolagreco: fighting the Streisand effect is futile
<spikebike>
nicolagreco: you carefully say "I am no longer publishing that picture".
<nicolagreco>
yes achin, but if I post a picture on facebook that I should not
<nicolagreco>
facebook tells me that, and removes it
<achin>
but you are right: removing mistakes from IPFS is hard
<ion>
‘<achin> i think in the world of IPFS, you use ipns+js (or something similar) to put a little message at the top of the blog "this blog has a newer version, please find it here: "’ I'd rather envision IPFS objects having an optional field for an IPNS path for its latest version and IPFS-capable browsers saying “there seems to be a newer version of this, click here to open it”.
<achin>
ion: you could do that today with a specially named link
<spikebike>
various caches, proxies, copycat sites, wayback machine, etc.
<spikebike>
nicolagreco: keep in mind that your IPFS site can have /mydir/foo/specialpicture.jpg
<spikebike>
you can change what specialpicture.jpg points to, but not what a hash points to
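(A hedged sketch of that workflow, with placeholder names and hashes: you re-add the changed directory and republish your IPNS name, so the name-based path moves while every old /ipfs/ hash keeps naming the old bytes.)
    $ ipfs add -r mysite                    # produces a new root hash once specialpicture.jpg changes
    $ ipfs name publish /ipfs/<new-root-hash>
    # /ipns/<your-peer-id>/mydir/foo/specialpicture.jpg now serves the new image,
    # while /ipfs/<old-root-hash>/mydir/foo/specialpicture.jpg still names the old one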
cemerick has joined #ipfs
<nicolagreco>
interesting
<nicolagreco>
well that points to a hash anyway
<nicolagreco>
and people will have the content of the previous hash
<nicolagreco>
or on clients it is hashed as ipns?
<spikebike>
just like with the web, you can't control other people's caches
<nicolagreco>
yes, but that's fine
<spikebike>
and anyone that can right click can download an image from the web
<nicolagreco>
yes I totally agree
<nicolagreco>
but there is a difference in the two cases
<nicolagreco>
one is opt-in and one is opt-out
<nicolagreco>
so to give you an example
<nicolagreco>
let's build a ipfs snapchat
<nicolagreco>
and a web snapchat
<spikebike>
and keep in mind users can choose to let javascript update the content to a newer version, but clients don't have to enable javascript
<nicolagreco>
now imagine I sent you a picture that you don't really see any value in
<nicolagreco>
you don't screenshot it
<nicolagreco>
and you basically lose it forever
<nicolagreco>
a month has passed
<spikebike>
though snapchat allows replay now for $1 or some such
<spikebike>
anyways, go ahead
<nicolagreco>
and you see in the news that that picture contained a magic number to enter to win a car
<nicolagreco>
now there is no way you can go back to that picture
<nicolagreco>
while with ipfs you can
<spikebike>
maybe, maybe not
<nicolagreco>
well you have the hash of the picture
<spikebike>
even then, maybe, maybe not
<nicolagreco>
you are more prone to store it
<nicolagreco>
and actually my example has a flaw
<spikebike>
well snapchat is mostly user -> user right?
<nicolagreco>
the assumption is that I send the pictures to all of my friends
<spikebike>
not user -> community
<nicolagreco>
you can broadcast to all of your friends
<spikebike>
right, but they don't download unless they want to
lgierth has joined #ipfs
<nicolagreco>
you download to view it
<spikebike>
and if they do they might well garbage collect it a week later
lgierth has quit [Client Quit]
lgierth has joined #ipfs
<nicolagreco>
yes the garbage collector is my only hope
<spikebike>
the plan is to enable gc by default
lgierth has quit [Client Quit]
<spikebike>
it's not that way today though (afaik)
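(What the cached-versus-pinned distinction looks like on a node today, sketched with standard commands; gc here is manual rather than automatic:)
    $ ipfs pin ls             # objects you explicitly pinned; gc never touches these
    $ ipfs pin rm <hash>      # unpin something you no longer want to keep
    $ ipfs repo gc            # collect all unpinned blocks from the local repo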
lgierth has joined #ipfs
<nicolagreco>
that is fine
<nicolagreco>
as long as it will be enabled by default
<spikebike>
pub/sub is planned for ipfs as well, but not yet
<spikebike>
(pub/sub is a better model for snapchat like stuff)
kpcyrd has joined #ipfs
<stoopkid>
ok so, if i want to make a wiki on IPFS, the immutability seems to make it make more sense to try to store an immutable edit history for each page, rather than to attempt to make 'mutable pages'
<stoopkid>
but..
<stoopkid>
it seems even if i did this mutability between different wiki pages would still be an issue
<spikebike>
stoopkid: why not both? default to newest, but allow seeing previous versions, like pretty much all wiki's I've seen
<nicolagreco>
btw, spikebike, achin, in europe the _right to be forgotten_
lgierth has quit [Client Quit]
<nicolagreco>
is actually a very hot topic
lgierth has joined #ipfs
<nicolagreco>
and permanence breaks that
<kpcyrd>
victorbjelkholm: would you consider setting up h.openipfs.xyz?
<spikebike>
nicolagreco: yeah, but it remains to be seen if that can be applied to software as opposed to companies
<spikebike>
especially software running on computers outside your company
<spikebike>
country I meant
acidhax has quit [Ping timeout: 255 seconds]
voxelot has quit [Ping timeout: 272 seconds]
<nicolagreco>
I actually have a question
<stoopkid>
well, i guess i'm not sure what's required to get permanent links to mutable pages on ipfs, so the immutable edit history was kind of a partial solution to this
<nicolagreco>
if I add something on ipfs with ipfs add
<nicolagreco>
are the other peers notified that I added an object?
<nicolagreco>
hence, is my object distributed through any peer before anybody else except me looks for that?
domanic has joined #ipfs
<fingertoe>
Really once you publish stuff to the web, all you can do is ask kindly for everybody to take it down. Once it is in the wild it is in the wild. IPFS doesn't really change that too much.
<spikebike>
hrm
<spikebike>
IPFS could offer a take down request that could be signed by the publisher
<spikebike>
clients could ignore such, but the default could be to honor it
<kpcyrd>
maybe that should be communicated, I think some people assume you can't access a file unless you know the hash
<spikebike>
stoopkid: IPNS can point to a hash
<spikebike>
stoopkid: run ipfs ls /ipfs/QmdVGBGisVyit787cv9fGgMXnfw5ZwBRZhgWZcpenvKFgc
<achin>
nicolagreco: i do think you have a lot of good points. perhaps this is why archiving existing Open data is a popular thing to do on ipfs -- no personal data to consider
<stoopkid>
hm, well the problem is that each page on the wiki would basically have to have an ipns address, and i'm not sure how to do this without a node for each page
<spikebike>
stoopkid: not a node, just a file/dir
<achin>
ultimately there should be an article or other resource that details what the ipfs model means for privacy laws, safe harbor laws, and other things
<nicolagreco>
yes, this is exactly where I am heading, achin
<nicolagreco>
spikebike I like the take down idea
<nicolagreco>
and I do appreciate all of your views on the topic
<spikebike>
nicolagreco: that way everyone knows it's a valuable photo and will republish it 100 times
<nicolagreco>
and I clearly see the advantages of IPFS
<spikebike>
I admit when I get email saying (sorry ignore that) I usually just delete it
<nicolagreco>
but it happens that I love IPFS and I work on Decentralized Personal Data
<stoopkid>
mmm, but just pointing the node to a file/dir doesn't solve the issue of mutable pages with permanent links between wiki pages, essentially any time any page on the wiki got updated, every page that links to it would have to get updated as well
<spikebike>
but when I see a "the sender is revoking this message" I take a very close look
<kpcyrd>
hetzner blackholed one of my IPs today.. forgot to blacklist some of the private IP ranges
<nicolagreco>
and I would love to find ways to have the good bits of ipfs
<spikebike>
stoopkid: no
<achin>
someone who is really diligent could dig up relevant case law in various countries about these topics
<nicolagreco>
achin actually we have all the harvard cyberclinics on our side
<fingertoe>
Unless somebody pays, most folks aren't going to pin content just for the sport of pinning content are they? If you stop publishing your own content it is likely to disappear unless others have pinned it, and why would they? Correct me if I am wrong.
<nicolagreco>
if we write down a list, I am sure the guys over there are waiting for very controversial stuff
<stoopkid>
so, each wiki-page on the ipfs-wiki would need a node running in order to have a peer address for ipns to redirect to the file
<spikebike>
fingertoe: for popular content (similar to a popular bittorrent swarm) there will always be peers with it cached (but not pinned)
<nicolagreco>
fingertoe yes, I agree with your thinking - I think I was very biased for not considering the gc
<spikebike>
so sure there's a hash that points at that index.html, but you can use the human friendly URL with IPNS to replace the big hash
<achin>
but also consider an extreme case: some hacker publishes a blog post containing the latest sony DRM keys. you pin this blog post. can Sony come after you legally?
<nicolagreco>
anyway, I will be writing a set of essays on the different design aspects to take into consideration when designing a new web
<nicolagreco>
achin if sony can find who you are, absolutely yes
<nicolagreco>
you are holding illegal content on your machine
<spikebike>
so for immutable objects use the hash if you want, for objects you expect to update use a name
<achin>
nicolagreco: i think i agree
<ion>
stoopkid: First, relative links from IPNS pages work fine. Second, the daemon will support multiple keys and multiple IPNS names in the future.
<achin>
nicolagreco: what about this scenario: instead of pinning the blog post, you just view it, and a copy is cached on your node. can Sony come after you legally?
<spikebike>
achin: yes
<spikebike>
well where you = ip address
Oatmeal has joined #ipfs
<nicolagreco>
whenever you have illegal content
<nicolagreco>
and the police breaks in your house
<stoopkid>
ion: well, the first part, IPNS pages = two IPNS nodes, right? because of the 'future' part of the second part?
<spikebike>
sadly even if the content is legal 8-(
<stoopkid>
two IPNS pages*
<nicolagreco>
and finds that content, you are in trouble
<nicolagreco>
but the real question is often whether they will eventually find you
<nicolagreco>
but in a way, this is the way it should be
<achin>
i think i also agree
<achin>
but notice how this is quite different than "normal" HTTP web browsing
<spikebike>
if you set your GC less than the DMCA required response time that should mostly be automatically fixed
<nicolagreco>
yes achin, exactly
<ion>
stoopkid: Two IPNS pages: /ipns/QmFoo/bar, /ipns/QmFoo/baz
<nicolagreco>
is there a way I can file a complaint about content I don't want online?
<nicolagreco>
on ipfs i mean
<achin>
no. at the moment i think you could only complain to gateway owners
screensaver has quit [Ping timeout: 256 seconds]
<spikebike>
nicolagreco: or join the irc channel and whine ;-)
<stoopkid>
ion: isn't that one node using multiple IPNS names?
<nicolagreco>
hahaha
<spikebike>
this is extremely similar btw to taking down a seedless torrent
<ion>
stoopkid: No, just the one QmFoo.
<nicolagreco>
it is the same case I guess
<nicolagreco>
spikebike achin do you have a twitter account?
<spikebike>
sorry, this is extremely similar btw to taking down a trackerless torrent
<spikebike>
nicolagreco: yes, but it's pretty much read only
<spikebike>
(and rarely read for that matter)
<nicolagreco>
I see
<nicolagreco>
how do people get in touch
<nicolagreco>
is a big question of the future
<achin>
i do, but same for me. i don't really monitor it
<ion>
The people who run the ipfs.io public gateway will publish the list of DMCA requests they receive in a machine readable form. You can choose to opt in to using that list.
<spikebike>
in both cases you can search a DHT for who has a checksum, find the peers, and feed them to lawyers/bots.
<nicolagreco>
yes
<nicolagreco>
I think I will need to study the way webrtc works
<spikebike>
nicolagreco: the IPFS based irc replacement isn't ready yet ;-)
<nicolagreco>
I wonder if via webrtc peers know each other's IP
<spikebike>
they do
<nicolagreco>
but I would guess so
<fingertoe>
MaidSAFE will probably be the BitTorrent replacement, not IPFS... IPFS is going to be better for mainstream web stuff..
<nicolagreco>
I havent read about maidsafe yet
<spikebike>
fingertoe: dunno, ipfs and torrenting is pretty similar, what makes maidsafe better?
<nicolagreco>
and even actually
<nicolagreco>
I might ask the most provocative question
* stoopkid
gets provoked
<nicolagreco>
if I could map bittorrent magnets to /torrent/*
<fingertoe>
Maidsafe chunks every file into thirds, encrypts the 3 files, then sends them out to designated host vaults.
<nicolagreco>
what is the difference between torrentfs and ipfs?
devbug has quit [Ping timeout: 250 seconds]
<spikebike>
fingertoe: so to download you just need the equivalent of a magnet string that's a checksum and a password?
<fingertoe>
It's a double-blind system -- You don't know where your data is being stored and the folks storing the data don't know what it is -- It's all routed through the DHT - IP addresses are stripped, so it will be pretty much impossible for IP folks to stop
<ion>
IPFS specifies and uses a global Merkle DAG format for one.
<spikebike>
fingertoe: that seems unlikely to be very effective
<fingertoe>
If you know the correct 3 (or more files) to request from the network you can reconstruct the original, but without the datamap it is all mysterious.
<spikebike>
fingertoe: so if I download 1GB from maidsafe and it comes from 3 hosts, my ISP (or my router) won't know which 3 hosts it came from?
<ion>
And everyone having the same data will be a potential seeder for it, people aren't split into islands by having slightly different torrent metadata for the same content.
<nicolagreco>
ion is that the only difference?
<spikebike>
ion: yeah, that's one of my favorites, thus torrents that have 90% of the same content take up much less space
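(A small illustration of that dedup, with a placeholder hash: identical bytes always produce the identical block, so shared content is stored and served once no matter how many names it has.)
    $ echo "shared payload" > common-a.bin
    $ cp common-a.bin common-b.bin
    $ ipfs add common-a.bin
    added QmPlaceholderHash common-a.bin
    $ ipfs add common-b.bin
    added QmPlaceholderHash common-b.bin    # same bytes, same hash, one block in the repo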
<fingertoe>
They will see traffic, but it is all encrypted coming from different nearby nodes -- but those nodes don't hold the files, they just relay them.
<spikebike>
fingertoe: so it's radically slower because that 1GB has to bounce through multiple nodes.
<fingertoe>
It isn't supposed to be -- we will see when it is done... It does use caching etc...
<spikebike>
fingertoe: so say now I have a few dozen peers spread around the maidsafe network and have them all download 1GB and stay online for a week. You don't think that by watching IPs, traffic, latencies, and requests I could figure out exactly where those blocks are coming from?
<ion>
IPFS also has things like IPNS and will have pubsub. Its hash function is upgradeable.
<nicolagreco>
but those are all addons
<fingertoe>
Not likely. And if you shut down where they were coming from, there are 4-12 more vaults serving the same data
<achin>
ion: there was also a "multicodec" thing at one point. like multihash, it allowed the merkledag format to be upgradable (or maybe even changeable)
<spikebike>
fingertoe: seems unlikely to provide any real protection. If someone visits a forum and gets a link to something illegal on maidsafe, any node participating in that download (visible on the network layer) would likely get a physical visit from the police
pjz has quit [Remote host closed the connection]
<spikebike>
at least with IPFS you'll never participate in a download of content you didn't actually ask for
<fingertoe>
No node gets anything that is identifiable though. Just encrypted chunks.
<fingertoe>
that cannot be decrypted without the other chunks.
<fingertoe>
Just noise on a wire.
<kpcyrd>
spikebike: opened my link?
<fingertoe>
It is all engineered to be as untraceable as possible...
nessence has quit [Remote host closed the connection]
<doublec>
fingertoe: SAFE (and other freenet like systems) give plausible deniability. It's difficult to tell if a node is a requester, originator or just a node inbetween.
nessence has joined #ipfs
<kpcyrd>
it was supposed to be smoother, but then I'd need to set up a site on http://, therefore I need a redirect
<fingertoe>
I was thinking about trying to set up the Self Encryption from safe to use on IPFS.. I think that might be handy -- but probably less secure than it is on safe..
<fingertoe>
You would need the files to be spread over multiple servers for it to be effective at all...
<fingertoe>
The problem I see for MaidSAFE is that it is a walled garden - Kinda like freenet -- for most legit uses I think IPFS is a better solution in that it can slide under the existing web without the users deciding anything.
<doublec>
fingertoe: how so? IPFS provides a web proxy to access it. So does freenet. I assume MaidSAFE will too.
dlight has quit [Ping timeout: 255 seconds]
<nicolagreco>
can I add some content to ipfs without being in the ipfs network?
<fingertoe>
basically when you log into SAFE you have a username, password and pin that are hashed together to store your identity.. From there it is kinda similar to the Pin in IPFS -- You have your datamaps hidden in the cloud at the random hash...
voxelot has joined #ipfs
voxelot has joined #ipfs
<spikebike>
fingertoe: from what I can tell publishing encrypted content to IPFS is much like publishing files in maidsafe
<fingertoe>
All the files are stored in a DHT but cut into at least thirds and encrypted -- It is an automated network --
<nicolagreco>
running a website like ipfs.pics can be very risky!
<nicolagreco>
if I now drop all of my copyrighted pictures
<spikebike>
if I sell a bunch of illegal photos for $50 it's not going to be fun explaining to a judge that 1000s of photos came from my computer but I didn't know they were there
<nicolagreco>
ipfs.pics will be the node that serves them all
<fingertoe>
You don't host the files you publish. They are sent to vaults all around the world.. Your computer stores encrypted chunks of other people's files, but you don't know whose, and because you don't have a complete piece, there isn't really a what
<spikebike>
fingertoe: ya, but every node is a vault, so illegal photos might come from your IP address
<stoopkid>
ok so, lets say now my wiki gets past the point that one node can store it all, well, i could put different pages out onto different nodes, and still have permanent links even if the two linked pages are on two different nodes
<stoopkid>
but, what if those nodes change
<fingertoe>
Well, encrypted chunks of what may be illegal may be stored in your vault, but you won't know it, nor will anybody else..
domanic has quit [Ping timeout: 255 seconds]
<spikebike>
fingertoe: except the FBI agent that paid $50 for kiddie porn.
<fingertoe>
Not really. He doesn't know where the data came from.. The network just looks it up and delivers it. It doesn't tell him how.
<orzo>
does ipfs get funding?
<spikebike>
fingertoe: yeah, but he'll see what IPs participated in the download and might well confiscate every node that participated.
tilgovi has quit [Ping timeout: 252 seconds]
<fingertoe>
Not really -- He will only see the bootstrap nodes -- the IPs are stripped in the rest of the routing..
<spikebike>
this idea that you aren't going to get in trouble because illegal content is encrypted and you might, or might not, have the key hasn't been tested in court.
<spikebike>
fingertoe: stripping IPs doesn't help, it's obvious who sends you packets.
<fingertoe>
Yes, but those packets are meaningless gibberish.
<fingertoe>
Time will tell. It isn't absolutely impossible, but it makes it damn high-hanging fruit.
<fingertoe>
The FBI cannot take down BitTorrent, and BitTorrent doesn't try nearly as hard as MaidSAFE.
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed feat/utp from 4be92ef to 860b5ca: http://git.io/vcXyV
<ipfsbot>
go-ipfs/feat/utp 5eb6eec Jeromy: vendor in utp code...
<spikebike>
fingertoe: er, bittorrent clients are busted regularly.
<lgierth>
kyledrake: ping
<fingertoe>
Sometimes, but I can still download darn near anything I want. And those guys don't encrypt their data -- They don't route it randomly, etc.
<fingertoe>
If you grab one of those machines you will find files on it that are obviously illegal. If you raid a maidsafe computer not so much.
<spikebike>
fingertoe: right, but in either the torrent or the maidsafe case you start with the magneturl/checksum/password whatever for the illegal file
<spikebike>
in either case you find that there's a list of IPs sending you data to build/decrypt your illegal file
<fingertoe>
We will see. You are of course assuming that it is illegal... But the bulk of it may be legal stuff. Bitcoin private addresses etc... There are plenty of legit uses.
<doublec>
spikebike: do ISPs get busted for routing packets that contain portions of 'illegal' files?
<jbenet>
fingertoe: i wouldn't be so believing of statements that purport perfect safety. besides, convergent encryption reveals that bad blocks are being stored.
<fingertoe>
I don't think anyone purports perfect safety.
<fingertoe>
Just make the fruit as high hanging as possible.
<spikebike>
doublec: routing is quite different from storing/retrieving, and yes ISPs have a DMCA officer for tracking such things and interacting with law enforcement to ensure they have safe harbor. Do you have a DMCA officer?
<jbenet>
you should all read up on how we will be handling "illegal or questionable bits" in the clear, and how IPFS nodes will be able to comply with whatever codes of conduct they wish to uphold.
<doublec>
spikebike: not storing files, routing packets
<doublec>
anyway, it's a useful distinction for ipfs to make it easy to track data
<spikebike>
doublec: well for routing packets they get contacted by law enforcement and they provide the person paying for the IP
<doublec>
in the "what does ipfs vs madesafe vs freenet vs X" debate
<jbenet>
including the ability to set own codes of conduct for clusters. (this is an important piece for people wanting to run services that may store user content that must abide by some community code of conduct. (e.g. no pornography or hatespeech in a children's website/webapp, etc)
<kpcyrd>
nicolagreco: installing ipfs on a desktop is risky because opening a random link in a regular browser is enough to cache content on somebody's computer
<doublec>
practicality-wise, the ipfs approach will be much more palatable as a result and will likely be more successful
<jbenet>
btw doublec: not so easy when nodes are on tor or i2p
<jbenet>
none of these are live today, but people are working toward that and the ipfs design allows for it
<stoopkid>
jbenet: how would a code of conduct be upheld?
<kpcyrd>
just because ipfs.pics has a public web interface doesn't mean I can't distribute images with nodes that don't
<jbenet>
stoopkid: by maintaining per-community ref denylists, and opting in to follow them. you can read more in the faq/notes; i don't have time to explain it all atm.
<nicolagreco>
I want to be kept up to speed with the policy-related issues (I raised some in this conversation)
<kpcyrd>
by the nature of a hash, flipping an irrelevant bit is enough to bypass the blacklist tho
<spikebike>
seems unlikely that anyone would use IPFS for all their surfing, seems like a set of white/black lists would enable people to not cache their sensitive bank information, but still cache the random surfing they do.
<spikebike>
kpcyrd: but you have to download to flip
<kpcyrd>
original idea was just pulling the js from 127.0.0.1 when available, but mixed content was kicking in so I had to add a redirect
<kpcyrd>
at which point I just could've redirected to a txt.. hrm
<jbenet>
kpcyrd: please read our code of conduct. I'm going to have to warn you not to do that again, and certainly not with anything actually bad. obviously, the drm key is all over the internet (including google search results), and "it shouldn't be a big deal", however we don't allow "proving of points" by causing people to do something that may be illegal. one
<jbenet>
might do the same with something much worse. We treat this sort of thing as equivalent to linking to an HTTP page that auto-downloads material (that is the analogue)
<jbenet>
it is possible in the HTTP web already, just as easily, and with equal consequences.
jhulten has quit [Ping timeout: 250 seconds]
<jbenet>
afk
<kpcyrd>
jbenet: so no more free PoCs? k
<deltab>
there is a difference though in that your browser's cache isn't normally accessible to others
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed feat/utp from 860b5ca to c1ef091: http://git.io/vcXyV
<fingertoe>
spikebike: Every conversation on SAFE is encrypted -- I don't think it is as easy as listening for a specific magnet to be transmitted because the feds would have to hack every key - and they rotate... Plus if you turn the computer off you will have different files next time.
<fingertoe>
I think it is much more robust than people who haven't been reading and watching give it credit for. But time will tell -- it certainly is an unproven product until it is further along than it is.
<spikebike>
fingertoe: you are thinking it from the wrong direction
<fingertoe>
Okay -- shoot -- Explain..
<spikebike>
straying a bit far from #ipfs though, don't want to annoy folks
Guest73396 has joined #ipfs
<fingertoe>
Yah, we are off topic -- My only point was I think there are much better technologies that are going to attract the pirates... IPFS is a better mainstream tool..
<fingertoe>
I do like the idea of stealing some of the technology and layering it into IPFS though.
<fingertoe>
For legitimate uses of course..
<spikebike>
encryption is on the todo
<spikebike>
but currently seems like static content and pub/sub are higher on the list
<tilgovi>
I have this idea forming to use shared one-time password generation to cycle rendezvous keys. I'm wondering if this would provide a way to do secure presence, in other words, notify peers of availability without making clear which peers you care about. That's a kind of subscription, though, which is why I asked about pub/sub.
<M-davidar>
tilgovi (IRC): so, in terms of commenting, I think we should be able to do something similar to what zeronet does for a start
<tilgovi>
I'm not familiar with how they do it.
<M-davidar>
website owners have to whitelist commenters, by publishing a list of their public keys
<M-davidar>
Readers then aggregate them all client side
thelinuxkid has joined #ipfs
<M-davidar>
Posting a comment means publishing it under your ipns address
<tilgovi>
yes to ipns
<tilgovi>
but what's this about whitelist commenters?
<M-davidar>
So, the way zeronet works, is you submit a request to the site owner to be able to comment, then they can approve or deny it (like a moderation system)
<tilgovi>
ohhhh
<tilgovi>
Right
<tilgovi>
I'm not so concerned about that
<M-davidar>
It's not ideal, but it should be reasonably easy to implement
<tilgovi>
Because so much of my experience is in side-band delivery
akkad has quit [Quit: Emacs must have died]
<tilgovi>
You don't need the publisher to deliver your comments. You get them from the communities or individuals you share comments with.
<M-davidar>
Yeah, definitely
<M-davidar>
That's what we want to do in the long term, but it's trickier ;)
<tilgovi>
notification (pingback) is definitely great, to give publishers that option
<tilgovi>
but I think we're saying the same thing
amstocker_ has quit [Ping timeout: 240 seconds]
<tilgovi>
whitelist == authorization
<tilgovi>
Whether the annotation is posted to the publisher, or canonically/first elsewhere
<M-davidar>
So the purpose of the moderation system is that you don't have to worry about discovery and large scale aggregation
<M-davidar>
Don't get me wrong, I'm not saying this is a good way to do it, but it might be an easy first step
<M-davidar>
Brb
<tilgovi>
I'm not suggesting you're wrong at all, just that when discussing the service of storing and retrieving comments there's no need at all to tie it to the publisher of the content which is being commented upon
<tilgovi>
whether they happen to be the same entity or not, I'm not sure it simplifies the protocol either way if its scope is limited to just what's needed to store and retrieve these comments
<tilgovi>
I guess what I mean is, publisher redistributing some whitelisted user comments is just a simple discovery and aggregation scheme
<tilgovi>
so, I think we agree
<tilgovi>
but below this could just be the storage
<tilgovi>
such as putting records in IPRS, or pointers in IPNS
<M-davidar>
yeah, so eventually we want aggregation to be completely decentralised
<M-davidar>
hence all the CRDT discussion that goes mostly over my head :p
<ianopolous>
thanks nicolagreco! :-)
martinkl_ has joined #ipfs
<M-davidar>
the main issue that needs to be addressed in either system, is that you don't want every client to have to be searching the entire network for comments to aggregate
f[x] has quit [Ping timeout: 260 seconds]
<M-davidar>
tilgovi: but yeah, in terms of storage, I think putting pointers in IPNS is definitely the way to go, whatever aggregation scheme gets built on top
M-davidar is now known as davidar
<tilgovi>
A tangent: Is there a generally agreed upon way that ipns can refer to URLs rather than objects?
<tilgovi>
The gateway puts ipfs on the Web, can we put the Web on ipns?
<davidar>
what kinds of URLs do you want to point to?
<tilgovi>
Anything. Just curious.
<davidar>
yeah, but it's probably frowned upon :p
<tilgovi>
Understood :)
<davidar>
and, just a slight nitpick
<tilgovi>
sure
<davidar>
the gateway puts ipfs on *HTTP*
<tilgovi>
I'm thinking about the ease of putting a web service on ipfs
<tilgovi>
thanks :)
<davidar>
the web is already on ipfs ;)
<tilgovi>
indeed
<davidar>
depends on what you mean by webservice
jhulten has joined #ipfs
<tilgovi>
http service
<davidar>
generally speaking, we're trying to decentralise web services, without having to rely on a central backend server
<tilgovi>
I'm just thinking about the ease of creating services that are exposed via DNS and IPNS, but maybe that's not sensible.
<davidar>
i mean, of course http services can interact with ipfs (like ipfs.pics, etc), but we're trying to encourage decentralisation wherever possible :)
<jbenet>
tilgovi: yeah you can do that. IPRS is meant to carry whatever records you want, like a more legit DNS TXT.
<jbenet>
so you could have an IPNS ptr to some other path.
<tilgovi>
what kinds of records can IPNS have today?
<tilgovi>
That may not be a valid question. I may misunderstand the distinction between IPNS and IPRS
<davidar>
ipns just points to an ipfs object (hash)
<davidar>
ipns is like git branch pointers
border0464 has quit [Ping timeout: 252 seconds]
<davidar>
also, I don't think IPRS exists yet (jbenet?)
border0464 has joined #ipfs
<tilgovi>
I need to sleep. :-/.
<tilgovi>
I think I'll check the scrollback in the morning and think more about this.
<tilgovi>
Thanks, davidar!
<davidar>
sure thing, I'll try to write a more coherent reply to your github issue too ;)
tilgovi has quit [Ping timeout: 260 seconds]
captain_morgan has joined #ipfs
<ipfsbot>
[go-ipfs] jbenet deleted goreq-1.5 at 0a0470d: http://git.io/vWHlt
* kandinski
has been discussing mutability on IPFS with a friend today, will read the backlog too.
Tv` has quit [Quit: Connection closed for inactivity]
<davidar>
kandinski (IRC): yeah, mutability on ipfs is a lot like git
<davidar>
is probably the most helpful way to think of it
s_kunk has quit [Ping timeout: 250 seconds]
Not_ has joined #ipfs
Not__ has joined #ipfs
<kandinski>
I still haven't read about IPNS
<kandinski>
I got sidetracked reading the Kademlia paper, then by work.
<kandinski>
Our question was whether IPNS names are also self-certified.
<kandinski>
and tonight is the first time I've come across IPRS.
<kpcyrd>
kandinski: if you just want to try it, ipfs name --help
<Stskeeps>
jbenet: do you know of any current people/projects actively working on making ipfs mobile friendly / special dht mode / 'mobile support nodes' that would be able to be delegated to take on duties from a mobile node to make it seem like a full node?
<kandinski>
kpcyrd: thanks
<Stskeeps>
jbenet: also thanks for ipfs, really interesting technology
<jbenet>
Stskeeps that's where we're headed, but not close yet. things like "delegated routing" will come into play here.
<Stskeeps>
studying to use it in mobile devices, hence my interest
<jbenet>
(e.g. a phone delegates to another node)
<jbenet>
Stskeeps we're very interested in supporting this use case-- though we have a bunch of work to do to get there. if you'd like to use it sooner and have a large application in mind, we can see about shifting priorities.
<Stskeeps>
jbenet: let's see; got some crazy ideas
<Stskeeps>
ipfs makes a hell of a lot more sense than supporting the 1000th cloud provider in OSes
<jbenet>
indeed.
<Stskeeps>
any particular ipfs github issues i should track to keep myself up to date on this stuff?
<davidar>
yeah, I'm really looking forward to ipfs on android :)
Not__ has quit [Ping timeout: 250 seconds]
Not_ has quit [Ping timeout: 250 seconds]
<davidar>
i haven't seen any issues to track it yet, just discussion on irc :/
<haadcode>
and also preparing the code to open-source it (which was the feedback last week from about everyone)
<rendar>
haadcode: that's cool, how does it work? does it store log files in ipfs?
<haadcode>
it stores the message content on ipfs as objects (ipfs object add)
<rendar>
the single line sent by each user?
<haadcode>
it keeps track of messages in a linked list, which is also saved in ipfs
<rendar>
i see
<haadcode>
rendar: yeah, a single line is a single object
<haadcode>
right now it uses a server to keep track of the head (hash) of the linked list. this hopefully changes to serverless over time when ipfs has the required features.
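(A sketch of the append-only log haadcode describes, using ipfs object put with placeholder hashes; the "prev" link convention is the application's, not something ipfs itself defines.)
    # each message is a DAG object whose "prev" link points at the previous head
    $ echo '{"Data":"hello world","Links":[{"Name":"prev","Hash":"QmPreviousHeadHash","Size":0}]}' | ipfs object put
    added QmNewHeadHash
    # only the latest head hash needs tracking (today by a server, later perhaps via
    # IPNS/pubsub); walking the prev links reconstructs the whole history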
<haadcode>
you can drag&drop files to it, and each file or directory is a single message
<gamemanj>
warning: content of picture may contain NSFW words and a good reason for the content holders to start suing people to death
<jbenet>
haadcode are you using IPNS, or how do you move around hash refs?
anticore has joined #ipfs
<haadcode>
jbenet: there's a server atm. for that (client sends an "update channel" request to it, server checks the integrity of the message and updates the channel's head if all good). hopefully in the future can use ipns for that.
<haadcode>
jbenet: but only for the head. the hashes to the actual content are in ipfs objects.
<gamemanj>
so, basically, all you're waiting on is for IPNS to allow you to use an arbitrary keypair :)
<haadcode>
gamemanj: exactly
<haadcode>
also, keystore (for key distribution) and pub/sub (so that one knows when the head was updated)
<haadcode>
but I can see a very valid use case for having a server do the tracking of the lists and performing authentication/authorization
<gamemanj>
well, if you only give people who are supposed to be in there the private key
<haadcode>
that all being said, kudos to the ipfs team on making the files move nicely. it's an amazing feeling when you add a file on one computer and click it to open on another and a video/audio starts streaming almost directly. it's amazing! :)
<gamemanj>
it's P2P magic
<haadcode>
yup
<Stskeeps>
looking forward to ipns with arbitary keypairs too
<rendar>
haadcode: it's thanks to the file being broken into little chunks
<rendar>
haadcode: so you quickly start to receive the first little chunks and the video/audio starts
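(You can see that chunking directly; assuming a placeholder hash for a large added file, these list the child blocks the file was split into:)
    $ ipfs refs QmSomeLargeFileHash
    $ ipfs object links QmSomeLargeFileHash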
<gamemanj>
I'm looking forward to arbitrary keypairs because all sorts of apps can be built on it, entirely decentralized
<gamemanj>
Essentially you can build blockchains on them, and you know how useful they are
<haadcode>
I'm hoping to clean up the code for that in the next couple of weeks and release most of it as open source. doing this as a side project so things move slowly...
<rendar>
doesn't ipns have arbitrary key pairs now?
<gamemanj>
rendar: I think the only missing thing is the interface
<haadcode>
gamemanj: me too. that's gonna be the killer feature of ipfs imo.
<rendar>
gamemanj: the ipfs <command> to do that?
<gamemanj>
rendar: yep
<gamemanj>
PublishWithEOL(ctx context.Context, k ci.PrivKey, value path.Path, eol time.Time)
<gamemanj>
this is in namesys/publisher.go
<gamemanj>
notice there are two features that just haven't been interfaced to yet
<gamemanj>
1. arbitrary EOL dates
<gamemanj>
2. Selecting whatever private key you want
<gamemanj>
this is wrapped by the raw Publish function: func (p *ipnsPublisher) Publish(ctx context.Context, k ci.PrivKey, value path.Path)
<gamemanj>
which still accepts a private key
edcryptickiller has joined #ipfs
<gamemanj>
but doesn't accept a date (that's where the arbitrary 24-hour limit is)
<jbenet>
haadcode: thanks -- yeah ipns should help. will discuss strategies for this sort of thing, but i suspect you'll want to make one ipns key per user per channel, so each user publishes to their own message log (aggregating all other messages seen). sort of like how matrix.org does it, i think. then needs a way to discover all other users in the channel, which
<jbenet>
might mean another key that anybody can write to, or pub/sub records, i.e. subscribers to a pub/sub channel
<rendar>
well, am i wrong to imagine ipns names (or "strings") like git's refs?
<gamemanj>
I imagine them that way...
<gamemanj>
but I'm not a dev, just someone who looks at the code trying to see if what I consider missing is possible :)
edcryptickiller has left #ipfs [#ipfs]
<jbenet>
rendar: they're git branches, but just not pretty-named.
<rendar>
jbenet: oh, i see
<haadcode>
jbenet: something like that yeah. was thinking one ipns key per channel. not sure yet how to do the presence part (pub/sub is prolly needed).
<rendar>
but how will pub/sub work? you publish something with ipfs <pub> <data> and that data will be submitted to whom? every connected ipfs server?
Encrypt has quit [Quit: Quitte]
<haadcode>
jbenet: tbh, I'm mostly waiting for the keystore atm. ipns would be nice but I can work on that around the use case of server being the single source of authority for access, so ipns can come later.
atgnag has quit [Read error: Connection reset by peer]
<haadcode>
jbenet: right now the messages are encrypted but the key distribution is always a pain in the server-client model, so I'm really curious to see how it can be solved with the ipfs keystore.
<gamemanj>
a keystore is a tad problematic, because it means you have to have all the keys you want for all your apps in the keystore...
<gamemanj>
if an app adds a key to the keystore, then if other apps could access it, it becomes a problem
<gamemanj>
doesn't even matter if the other apps could know the private key or not, if it's usable by the other apps, you now have a problem.
<haadcode>
true
<haadcode>
I'm sure there's going to be a smart solution for that, too
<gamemanj>
thus, the keystore should not be usable from the API. The user could use the keystore for convenience, but apps should have to do things with raw private keys (presumably embedded into the app or given by the user)
atgnag has joined #ipfs
anticore has quit [Read error: Connection reset by peer]
Guest73396 has quit [Ping timeout: 244 seconds]
<haadcode>
thanks for the feedback ya'll. makes me motivated to release this.
Oatmeal has quit [Read error: Connection reset by peer]
<AlexPGP>
gamemanj: thanks for the link. did the 'ipfs --json ...' dance
<AlexPGP>
all works
jhulten has quit [Ping timeout: 250 seconds]
<AlexPGP>
question: what is best way to "publish" files?
bsm1175321 has quit [Remote host closed the connection]
<AlexPGP>
(I dropped some pix into Files and see their hashes, but how do I make the outside world aware of the existence of such files?)
compleatang has quit [Ping timeout: 252 seconds]
deawar has joined #ipfs
deawar has quit [Client Quit]
compleatang has joined #ipfs
deawar has joined #ipfs
SuperN00B has joined #ipfs
<kpcyrd>
AlexPGP: link them ;)
domanic has joined #ipfs
f[x] has joined #ipfs
<SuperN00B>
so as my nick implies i got issues
<SuperN00B>
can I ask some basic questions about how to config IPFS?
<ion>
Sure, no need to ask to ask.
<dignifiedquire>
SuperN00B: that’s one of the main ideas for this channel :)
<SuperN00B>
working on getting rasp pi 2 up but having issues
<SuperN00B>
ERROR: ENV variable $Editor not set
<ion>
What prints that?
<SuperN00B>
command: ipfs config edit
<ion>
Huh, weird. It tries to use $EDITOR here.
<SuperN00B>
so how do I set the $EDITOR variable?
Matoro has quit [Ping timeout: 255 seconds]
<SuperN00B>
BTW using ipfs version 0.3.9-dev
<SuperN00B>
I posted an issue stating that when I started ipfs daemon it only tried ipv6 addresses
elima has quit [Ping timeout: 260 seconds]
<SuperN00B>
juan told me to edit the config and now I am wondering, so I am sorta stuck.
<ion>
“EDITOR=nano ipfs config edit” seems to work when the daemon is not running, but it seems to try to look for EDITOR in the daemon’s environment when the daemon is running.
<SuperN00B>
so sudo ps -A|grep ipfs returns nothing
<ion>
Try running that command then.
<SuperN00B>
you mean"EDITOR=nano ipfs config edit"
<ion>
yes
<ion>
Heh, if you have the daemon running, “ipfs config edit” will try to start the editor in the terminal in which the daemon is running (if any).
<ion>
That’s probably a bug.
bsm1175321 has joined #ipfs
<SuperN00B>
this is what I tried: sudo $EDITOR=nano ipfs config edit
Matoro has joined #ipfs
<ion>
Please don’t use sudo here, ipfs should run as your normal user.
<ion>
Also you should not add a $ when setting a variable.
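(Putting ion's two corrections together, a sketch of the failing versus working invocations:)
    # what was typed:
    #   sudo $EDITOR=nano ipfs config edit
    # what should work, run as the normal user and with no $ on assignment:
    $ EDITOR=nano ipfs config edit
    # or set it for the whole session:
    $ export EDITOR=nano
    $ ipfs config edit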
<SuperN00B>
ok
<SuperN00B>
recheck that nick....told ya I had issues
<SuperN00B>
damn yeah!!
<SuperN00B>
Thank you that was simple enough
<SuperN00B>
ion should I attempt to simply comment out the ipv6 stuff or remove it completely?
jhulten has joined #ipfs
domanic has quit [Ping timeout: 240 seconds]
mildred has quit [Ping timeout: 240 seconds]
SuperN00B has quit [Quit: bye]
fingertoe has quit [Ping timeout: 246 seconds]
<whyrusleeping>
huh, i was unaware ipfs config edit was a thing
fingertoe has joined #ipfs
Matoro has quit [Ping timeout: 260 seconds]
<AlexPGP>
So, is it the case that, once I deposit a photo (or any other file) in my 'Files' page, the hash and its location (on my machine) is communicated to the, um, swarm?
<dignifiedquire>
whyrusleeping: are we getting 0.3.9 today?
<AlexPGP>
Or is there something else I must do?
<dignifiedquire>
(I heard some rumours
<whyrusleeping>
dignifiedquire: i certainly hope so
<whyrusleeping>
theres two more PRs i want in before i'll cut that
<whyrusleeping>
entirely bugfixes
<whyrusleeping>
no features
<dignifiedquire>
cool :)
<whyrusleeping>
dignifiedquire: do you think you could help test some webui things?
<dignifiedquire>
sure
<whyrusleeping>
i want to make sure the webui works fine on latest master before saying 0.3.9 can ship
<dignifiedquire>
also, is ndjson in or not? because if it is in, the node-api will break entirely I think, which in turn will break the webui?
hoboprimate has joined #ipfs
domanic has joined #ipfs
<whyrusleeping>
'ndjson' has been in for a while, minus the whole newline delimited bit
<whyrusleeping>
but you should be able to set stream-output to false
<whyrusleeping>
(or whatever that option was)
<dignifiedquire>
stream-channels I think
<whyrusleeping>
hrm, looks like we ignore stream channels
<dignifiedquire>
:(
<whyrusleeping>
let me see if i can make the option respected again...
<dignifiedquire>
I can just update node-ipfs-api to understand ndjson
<dignifiedquire>
and then we cut a new release which we use in the webui and everybody is happy
<dignifiedquire>
(hopefully ;)
<whyrusleeping>
if you can get that done today i would be so happy
SuperN00B has joined #ipfs
<SuperN00B>
whyrusleeping when you said to remove from swarm did u mean comment out or delete?
<SuperN00B>
Sorry no context--> ipv6 only when starting up raspberry pi
<whyrusleeping>
oh, delete
<whyrusleeping>
it's json so we can't comment :/
<SuperN00B>
so when I do that do I need to recompile the config?
<SuperN00B>
getting Error: Failure to decode config: invalid character '}' looking for beginning of object key string
<dignifiedquire>
whyrusleeping: will make a branch now, there is an “official” ndjson parser out there which I will try to use
<whyrusleeping>
superN00B: invalid json
<whyrusleeping>
make sure you dont have a trailing comma before the line you removed
<SuperN00B>
got it
<dignifiedquire>
whyrusleeping: not sure how to test it though as the api relies on the npm package for go-ipfs and node-ipfs-ctl any ideas on how to make them use 0.9?
<dignifiedquire>
*0.3.9
<dignifiedquire>
daviddias: are you around?
<SuperN00B>
so this {
<SuperN00B>
"API": {
<SuperN00B>
"HTTPHeaders": null
<SuperN00B>
},
<SuperN00B>
"Addresses": {
<SuperN00B>
"API": "/ip4/127.0.0.1/tcp/5001",
<SuperN00B>
"Gateway": "/ip4/127.0.0.1/tcp/8080",
<SuperN00B>
"Swarm": [
<SuperN00B>
"/ip4/0.0.0.0/tcp/4001"
<SuperN00B>
],
<SuperN00B>
},
<SuperN00B>
"Bootstrap": [
<SuperN00B>
is what i have for the swarm
<SuperN00B>
code below bootstrap cut
Guest73396 has quit [Ping timeout: 268 seconds]
akkad has joined #ipfs
<demize>
Trailing comma after swarm.
<SuperN00B>
before the "]"
<SuperN00B>
rather after?
<demize>
After the ].
<SuperN00B>
thanks
border0464 has quit [Ping timeout: 252 seconds]
<SuperN00B>
Error: Failure to decode config: invalid character ']' looking for beginning of value
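(For reference, the Addresses block pasted above with the stray trailing comma removed; JSON allows no trailing comma after the last element of an array or object:)
    "Addresses": {
      "API": "/ip4/127.0.0.1/tcp/5001",
      "Gateway": "/ip4/127.0.0.1/tcp/8080",
      "Swarm": [
        "/ip4/0.0.0.0/tcp/4001"
      ]
    },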
<dignifiedquire>
whyrusleeping: any ideas why the hashes would be different when using 0.3.9 instead of 0.3.7?
<SuperN00B>
Thank you Cryptix
pfraze has joined #ipfs
<dignifiedquire>
whyrusleeping: nevermind seems a bug in the response parsing
amstocker_ has joined #ipfs
martinBrown has quit [Quit: -]
<dignifiedquire>
whyrusleeping: interesting it seems the response to “ipfs add -r folder” changed in 0.3.9 when called from the api, it appends one last entry which has an empty name
<SuperN00B>
going for lunch, thank you, you bunch of super geeks (said with a warm smile and a gleam in his eye)
martinBrown has joined #ipfs
SuperN00B has quit [Quit: bye]
<stoopkid>
ok so, if i'm making a wiki, and i have all the pages in the wiki under a folder, i can add the folder to my node and it will keep relative links to the files, so if the file changes, the hash will change but the relative link does not (if i just re-add my folder), and i can have a permanent URL to my IPFS-wiki by using IPNS
<stoopkid>
now, if the wiki grows larger than can be maintained on a single node, then i'll have to distribute the storage across multiple nodes
<stoopkid>
assuming these nodes weren't going to change, then IPNS can still allow for permanent URLs
poka has quit [Quit: more xen bugs wheeee]
<stoopkid>
(some of the permanent URLs would use one node, some would use another, there can be some redundancy, etc..)
<dignifiedquire>
whyrusleeping: if I fix that, the webui works without any changes to node-ipfs-api (I don’t know why though, I was sure it would break)
<stoopkid>
but now, if the nodes that are storing different parts of the wiki change from time to time, then it seems the 'permanent' URL has to change accordingly
Matoro has joined #ipfs
<dignifiedquire>
whyrusleeping: afk for a bit ping me if you need more details
<whyrusleeping>
dignifiedquire: wait, an extra entry from add?
<stoopkid>
like, if i move my storage from node A to node B, then the IPNS URL will change from A's peer-id to B's peer-id
water_resistant has joined #ipfs
atrapado has joined #ipfs
<stoopkid>
is there a way to get URL-permanence even across node-changes?
<pinbot>
now pinning /ipfs/QmSd6ZGcdQDGbso1bCT5qGb1u1KB7z9eYUXaUU9XkcEW5Q
amstocker_ has quit [Ping timeout: 265 seconds]
rotula has joined #ipfs
sonatagreen has quit [Ping timeout: 250 seconds]
<whyrusleeping>
dignifiedquire: sweet, thanks
pinbot has quit [Ping timeout: 252 seconds]
pinbot has joined #ipfs
* cryptix
hides
domanic has quit [Ping timeout: 256 seconds]
Oatmeal has joined #ipfs
compleatang has quit [Ping timeout: 256 seconds]
s_kunk has quit [Ping timeout: 246 seconds]
compleatang has joined #ipfs
mvr_ has quit [Quit: Connection closed for inactivity]
hoboprimate has quit [Remote host closed the connection]
cemerick has joined #ipfs
hoboprimate has joined #ipfs
hoboprimate has quit [Remote host closed the connection]
hoboprimate has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to feat/ipns-cache: http://git.io/vWd8u
<ipfsbot>
go-ipfs/feat/ipns-cache 5ea8cb4 Jeromy: add in ttl option to ipfs name publish...
mvr_ has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed feat/ipns-cache from 5ea8cb4 to 73e3437: http://git.io/vWWp5
<ipfsbot>
go-ipfs/feat/ipns-cache 73e3437 Jeromy: add in ttl option to ipfs name publish...
Matoro has quit [Ping timeout: 260 seconds]
f[x] has quit [Ping timeout: 260 seconds]
simonv3 has joined #ipfs
_jkh_ has quit [Changing host]
_jkh_ has joined #ipfs
pfraze has quit [Remote host closed the connection]
Encrypt has quit [Quit: Quitte]
ianopolous has joined #ipfs
dignifiedquire has quit [Quit: dignifiedquire]
Matoro has joined #ipfs
rotula has quit [Remote host closed the connection]
Syun has joined #ipfs
<ion>
whyrusleeping: Awesome, thanks
<whyrusleeping>
ion: i dont know if you saw the --nocache option as well
martink__ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<ion>
whyrusleeping: I did, but it ought to be the publisher, not the viewer, that chooses the appropriate caching behavior.
Eudaimonstro has joined #ipfs
thomasreggi_ has quit [Quit: Connection closed for inactivity]
<whyrusleeping>
right, the publisher can set their preference for caching behaviour
<whyrusleeping>
but the viewer ultimately gets to make the decision
<whyrusleeping>
for example, if you set the ttl to 1000 years, i dont want to be stuck holding that record for that long
<akkad>
Tpwd
<whyrusleeping>
so i have it set now to min(userCacheVal, publisherCacheTTL, recordEOL)
martinkl_ has joined #ipfs
<ion>
whyrusleeping: Absolutely. The TTL is only the maximum limit on how long the resolver is allowed to keep a potentially stale record.
<whyrusleeping>
alright, cool. same page
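That min() rule reads naturally as a small helper. A minimal sketch, assuming illustrative names and signatures rather than the real go-ipfs identifiers:

```go
// Sketch of the caching rule described above: a resolver keeps an IPNS record
// no longer than its own limit, the publisher's TTL, or the record's EOL,
// i.e. min(userCacheVal, publisherCacheTTL, recordEOL).
package main

import (
	"fmt"
	"time"
)

func effectiveCacheTTL(userCacheVal, publisherTTL time.Duration, recordEOL time.Time) time.Duration {
	ttl := userCacheVal
	if publisherTTL < ttl {
		ttl = publisherTTL
	}
	if untilEOL := time.Until(recordEOL); untilEOL < ttl {
		ttl = untilEOL
	}
	if ttl < 0 {
		ttl = 0 // record already expired: don't cache it at all
	}
	return ttl
}

func main() {
	eol := time.Now().Add(24 * time.Hour)
	fmt.Println(effectiveCacheTTL(10*time.Minute, time.Minute, eol)) // 1m0s
}
```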
cemerick has quit [Ping timeout: 250 seconds]
<whyrusleeping>
now just got to get it merged...
<whyrusleeping>
i might try to ship 0.3.9 tonight with bugfixes, and then put the ipns cache stuff after that
<ion>
Cool
<whyrusleeping>
or maybe the cache stuff is a 'bugfix' ?
<whyrusleeping>
hmm
<whyrusleeping>
no... it smells like a feature because of all the options and cli stuff changed
f[x] has joined #ipfs
<achin>
smells like PROGRESS
<ion>
whyrusleeping: I have been meaning to see whether I could manage to make the gateway serve the appropriate Cache-Control/max-age and ETag headers for IPNS paths. I still haven't got around to going through a Go tutorial but perhaps I could figure it out along the way.
<ion>
Those headers would make the user experience of loading IPNS sites through the gateways better.
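What ion is proposing could look roughly like the following. This is a hedged sketch using only net/http; the handler, the resolved path, and the 60-second TTL are made-up stand-ins, not the gateway's actual code:

```go
// Set HTTP caching headers for an IPNS request: Cache-Control from the
// record's TTL, and the immutable /ipfs/ path it resolved to as a strong ETag.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func setIPNSCacheHeaders(w http.ResponseWriter, resolvedPath string, ttl time.Duration) {
	w.Header().Set("Cache-Control", fmt.Sprintf("public, max-age=%d", int(ttl.Seconds())))
	// The resolved /ipfs/... path is content-addressed, so it changes exactly
	// when the content changes and works as a strong validator.
	w.Header().Set("Etag", `"`+resolvedPath+`"`)
}

func main() {
	http.HandleFunc("/ipns/", func(w http.ResponseWriter, r *http.Request) {
		// Pretend the name resolved to this (fake) path with a 60s TTL.
		setIPNSCacheHeaders(w, "/ipfs/QmExampleExampleExampleExample", 60*time.Second)
		fmt.Fprintln(w, "hello from a toy gateway")
	})
	_ = http.ListenAndServe("127.0.0.1:8899", nil)
}
```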
fingertoe has joined #ipfs
<achin>
i think this was discussed a day or two ago, but was it agreed that "ipfs add" should gain a --no-pin option to prevent the automatic pinning?
<whyrusleeping>
ion: would setting the Cache-Control header to 1 minute for now work?
<whyrusleeping>
er, i guess i would make it respect the config value
<whyrusleeping>
but it would take a good chunk of work to make it get the ttl from the record object itself
voxelot has quit [Ping timeout: 246 seconds]
s_kunk has joined #ipfs
<whyrusleeping>
since the gateway just calls 'resolve'
voxelot has joined #ipfs
voxelot has quit [Changing host]
voxelot has joined #ipfs
<Stskeeps>
so, that Qmsomething hash a IPNS name has is trivially derivable from public key?
<whyrusleeping>
yeap
<richardlitt>
The IPFS paper says Kademlia provides "Efficient lookup through massive networks: queries on average contact ⌈log2(n)⌉ nodes. (e.g. 20 hops for a network of 10,000,000 nodes)."
<richardlitt>
But 2^20 is only 1,000,000.
<Stskeeps>
it's like a hash of the public key or something i didn't realize
<Stskeeps>
?
patcon has joined #ipfs
<whyrusleeping>
Stskeeps: yeah, its the sha256 of the public key
<Stskeeps>
ok
Matoro has quit [Ping timeout: 260 seconds]
<whyrusleeping>
in multihash format
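A sketch of that derivation, assuming the go-multihash package (its import path has moved over time) and a stand-in byte slice where go-ipfs would use the protobuf-serialized public key:

```go
// The IPNS name / peer ID is the SHA-256 digest of the node's serialized
// public key, wrapped as a multihash and displayed in base58 (a Qm... string).
package main

import (
	"crypto/sha256"
	"fmt"

	mh "github.com/multiformats/go-multihash"
)

func ipnsNameFromPubKey(serializedPubKey []byte) (string, error) {
	digest := sha256.Sum256(serializedPubKey)
	encoded, err := mh.Encode(digest[:], mh.SHA2_256)
	if err != nil {
		return "", err
	}
	return mh.Multihash(encoded).B58String(), nil
}

func main() {
	// In go-ipfs the input is the protobuf-serialized public key; these raw
	// bytes are just a placeholder for the example.
	name, err := ipnsNameFromPubKey([]byte("stand-in for a serialized public key"))
	if err != nil {
		panic(err)
	}
	fmt.Println("/ipns/" + name)
}
```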
<whyrusleeping>
richardlitt: yeah, but log2(10,000,000) == ~22
<whyrusleeping>
23
<richardlitt>
So, 20 is a generalization
<whyrusleeping>
yeah
<tilgovi>
I was just re-reading that all last night.
<tilgovi>
I was wondering whether it's clear enough to the reader that the network is abstract there.
<tilgovi>
Because, 23 peer hops might be many more IP hops.
<richardlitt>
tilgovi: tbh, I've got no real idea what it means
<richardlitt>
I've spent the past 20 mins reading up on Big O notation and logx stuff
<tilgovi>
It means that packets travel through ~23 nodes on average to go between nodes within the system.
<richardlitt>
I'm not sure what average contact means, either.
<tilgovi>
If you and I have a node each, and I want to route a packet from mine to yours, it will go through an average of ~23 nodes on the way
<richardlitt>
If I go from Node A to node W in 23 nodes, isn't going from Node A to Node B still just one node?
<richardlitt>
One node hop, I mean
<tilgovi>
What I'm pointing out is that each "hop" is between two nodes, but if those are connected by an IP network, that may traverse many other machines that aren't part of IPFS, just routers on the internet.
anticore has joined #ipfs
<richardlitt>
OK
<tilgovi>
23 IPFS nodes would be involved in the routing, or finding of a path, between Node A and Node B.
<richardlitt>
Cool.
<richardlitt>
I still think it should be 1 million, not 10 million, for that example
<tilgovi>
I think you may be right.
<richardlitt>
Because log2(1000000) = 19.93
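A throwaway check of the arithmetic, which supports richardlitt's "1 million, not 10 million" reading of the paper's example:

```go
// Kademlia's expected lookup cost is ceil(log2(n)) contacted nodes
// for a network of n peers; compute it for both network sizes discussed.
package main

import (
	"fmt"
	"math"
)

func main() {
	for _, n := range []float64{1e6, 1e7} {
		fmt.Printf("n = %10.0f  log2(n) = %5.2f  ceil = %2.0f\n",
			n, math.Log2(n), math.Ceil(math.Log2(n)))
	}
	// n =    1000000  log2(n) = 19.93  ceil = 20
	// n =   10000000  log2(n) = 23.25  ceil = 24
}
```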
anticore has quit [Remote host closed the connection]
<richardlitt>
Thanks tilgovi
<richardlitt>
also, hi! :D
<tilgovi>
hi :)
<richardlitt>
You still working for hypothes.is in SF?
<tilgovi>
nope
<tilgovi>
Still in Oakland, not currently employed.
<richardlitt>
word
infinity0 has quit [Ping timeout: 244 seconds]
<tilgovi>
Working on annotator and related projects still, though.
<richardlitt>
Cool
<richardlitt>
I moved to Boston, but funding ran out for Beagle and I've been focusing on personal projects and life since. Started contributing more here recently. Currently (as in, right now) in the MIT Media Lab working on some scientific metaknowledge research.
voxelot has quit [Ping timeout: 256 seconds]
<richardlitt>
Keep meaning to learn more about CouchDB and fix Beagle to the point where it is useable.
infinity0 has joined #ipfs
Matoro has joined #ipfs
<richardlitt>
Good luck with the projects. Saw you reopened work on your dom anchoring library.
<achin>
anybody have a moment to help out a go-newbie with building ipfs from source?
<achin>
i have a bunch of 'cannot find package' errors when running "make install" from $GOPATH/src/github.com/ipfs/go-ipfs/
<tilgovi>
It's a framework for applications that manage DOM overlays, I guess.
infinity0 has quit [Remote host closed the connection]
<revolve>
that's awesome, tilgovi
<tilgovi>
I'm working to isolate the components more and expose more framework hooks that really capture the lifecycle of what I think browser annotation involves: selection, anchor identification, storage
<tilgovi>
and display, of course
<revolve>
tilgovi: I've been working on a p2p caching proxy that facilitates realtime collaboration on DOM trees
infinity0 has joined #ipfs
<revolve>
could you see annotator being used in a sort of etherpad for web pages?
<revolve>
thanks for reminding me about hypothes.is, man
Vaul has joined #ipfs
border0464 has joined #ipfs
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<tilgovi>
no problem :)
<Stskeeps>
after 'getting' how IPNS actually works, i have to say that's damn smart
<ion>
whyrusleeping: Could “resolve” perhaps have a variant that also returns a TTL value? For records with a TTL longer than 1 minute, having a max-age header of 1 minute would be better than nothing. For records with a short TTL, a max-age of 1 minute could interfere with the intended operation (but I realize that pubsub is going to cover that use case in the future).
mungojelly has quit [Remote host closed the connection]
martinkl_ has joined #ipfs
hellertime has quit [Ping timeout: 240 seconds]
NeoTeo has quit [Quit: ZZZzzz…]
<ion>
richardlitt: For algorithms, O(f(n)) means that there exists some constant c (and a lower bound n₀) such that c·f(n) ≥ the resource (time, memory etc.) consumption of the algorithm for all input sizes n (≥ n₀).
wscott has joined #ipfs
<ipfsbot>
[node-ipfs-api] diasdavid closed pull request #81: Make solid tests for node-ipfs-api (master...solid-tests) http://git.io/vWsES
<richardlitt>
ion: mind if I ask some questions?
<ion>
whyrusleeping: Resolving /ipns/<hostname> should probably also note the DNS TTL, and if /ipns/<hostname> points to /ipns/<keyhash>, the max-age should probably be the minimum of the TTLs.
<ion>
richardlitt: Go ahead
<richardlitt>
ion: What do you mean by constant?
<richardlitt>
ion: What does N_0 mean?
ashark has quit [Ping timeout: 272 seconds]
chriscool has joined #ipfs
Oatmeal has quit [Ping timeout: 256 seconds]
<achin>
constant here means that 'c' doesn't change while n is changing
Oatmeal has joined #ipfs
<richardlitt>
is · multiply in this notation?
dd0 has quit [Ping timeout: 255 seconds]
<ion>
richardlitt: Say, i have an algorithm that consumes 1000 CPU cycles at start plus 100·n² cycles for the number of input items n. Given a large n, the 100·n² dominates, making the initial 1000 cycles negligible. So we could arbitrarily pick e.g. c = 110, n₀ = 10. For all n ≥ 10, 110·n² ≥ the runtime of the algorithm, thus the algorithm has time complexity O(n²).
<ion>
Yeah, it’s multiplication.
<achin>
your non-ascii is not rendering well for me, ion :(
<tilgovi>
The n₀ bit is about the existence of a lower bound after which the work done proportional to input size is most of the work, dominates whatever constant multiplier or initial, static cost
<ion>
achin: Aww. Try clicking on the botbot link in the topic.
<tilgovi>
and c is that constant
<tilgovi>
I wonder if it shouldn't be two constants in this explanation, a + b·f(n)
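Condensing the explanation so far into one formula (this is the standard textbook definition, with the two-constant a + b·f(n) form folded in):

```latex
% T(n): the algorithm's cost (time, memory, ...) on inputs of size n.
T(n) \in O(f(n))
  \iff
  \exists\, c > 0,\ n_0 \ \text{such that}\ \ T(n) \le c \cdot f(n)
  \quad \text{for all } n \ge n_0.

% With an explicit startup cost a and per-unit cost b, a single constant c
% absorbs both whenever f(n) \ge 1:
a + b \cdot f(n) \;\le\; (a + b) \cdot f(n),
  \qquad n \ge n_0,\ f(n) \ge 1.
```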
<richardlitt>
OK
<richardlitt>
So, that's what O(f(n)) is
<richardlitt>
How does that improve my knowledge of Kademlia or IPFS? (sorry if this is blunt)
<richardlitt>
(I'm not sure how knowing O(f(n)) helps me)
<richardlitt>
(also, I think I got it, ion, thanks)
<richardlitt>
How am I supposed to pronounce O(f(n))? Oh-function?
<tilgovi>
O(f(n)) is the resource consumption ignoring fixed and proportional costs, considering only the "complexity" of the relationship of cost to input size
<tilgovi>
I've always heard "Big Oh"
<richardlitt>
ahhhh
<ion>
f(n) is pronounced “f of n”; O(x) is typically pronounced “big oh x”, i guess. Or “big of oh x”? “Order x”?
<richardlitt>
so basically, it's saying "Let's ignore the fact that <insert random company> has <random number> or more users than us, how do our algorithms compare to theirs?"
<ion>
s/of oh/oh of/
<tilgovi>
"order" yeah
<tilgovi>
that too
<tilgovi>
richardlitt, yes
<richardlitt>
or any other variables really
<richardlitt>
cool. Ok, sorry, was having difficulty applying it to a real-world situation
<tilgovi>
don't care that their machines are faster (some fixed multiplier on the time the algorithm takes) or that they use an interpreted language (some parse time on startup, perhaps?), just how complex is the algorithm
<tilgovi>
bbiab
nessence has quit [Remote host closed the connection]
nessence has joined #ipfs
simonv3 has quit [Quit: Connection closed for inactivity]
<ipfsbot>
[node-ipfs-api] diasdavid tagged v2.5.0 at a8ec517: http://git.io/vWF0i
chriscool has quit [Read error: Connection reset by peer]
<achin>
if i have a hash that is indirectly pinned, is there an easy way to see why?
chriscool has joined #ipfs
<jbenet>
hey tilgovi: in kademlia you wouldn't route through 23 _hops_, you would have a rough max of 23 peers to talk to to find a _direct connection_ to the end peer.
<jbenet>
(i.e. it's an overlay network and you try to talk directly if you can -- you may not be able to, so yeah there's relay possibilities. and then there's turning the whole thing into a packet-switched network, but that's separate)
<wscott>
ion, you have to do raw socket dgrams and parse it yourself. (Or add it to the net library)
TheWhisper has quit [Quit: Leaving]
<ion>
whyrusleeping: Perhaps resolveOnce could return a TTL value, and resolve could start at infinity and compute the minimum of the previous TTL value and whatever was returned by resolveOnce at each iteration.
<ion>
whyrusleeping: Given that it seems to be a pain to get a TTL value for a DNS lookup, perhaps make a reasonable assumption like 1 minute for it. DNS can still point to /ipns/<keyhash> which can lower the TTL according to the iteration above.
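The loop ion describes might look like this hypothetical sketch. resolveOnce is passed in as a function value because the real resolver interface isn't reproduced here, and the names and TTLs in the example chain are invented:

```go
// Resolve an IPNS name by following indirections, starting the TTL at
// "infinity" and keeping the minimum TTL reported by each resolveOnce step
// (DNS hop, /ipns/<keyhash> hop, ...), until an immutable /ipfs/ path is reached.
package main

import (
	"fmt"
	"strings"
	"time"
)

type resolveOnceFn func(name string) (next string, ttl time.Duration, err error)

func resolveWithTTL(name string, once resolveOnceFn, maxDepth int) (string, time.Duration, error) {
	minTTL := time.Duration(1<<63 - 1) // effectively infinity
	path := name
	for i := 0; i < maxDepth; i++ {
		next, ttl, err := once(path)
		if err != nil {
			return "", 0, err
		}
		if ttl < minTTL {
			minTTL = ttl
		}
		if strings.HasPrefix(next, "/ipfs/") {
			return next, minTTL, nil // reached immutable content
		}
		path = next
	}
	return "", 0, fmt.Errorf("too many indirections resolving %s", name)
}

func main() {
	// Toy chain: /ipns/example.com (DNS, assume 60s) -> /ipns/<keyhash> (30s) -> /ipfs/Qm...
	once := func(name string) (string, time.Duration, error) {
		switch name {
		case "/ipns/example.com":
			return "/ipns/QmFakeKeyHash", 60 * time.Second, nil
		case "/ipns/QmFakeKeyHash":
			return "/ipfs/QmFakeContentHash", 30 * time.Second, nil
		}
		return "", 0, fmt.Errorf("unknown name %q", name)
	}
	path, ttl, _ := resolveWithTTL("/ipns/example.com", once, 8)
	fmt.Println(path, ttl) // /ipfs/QmFakeContentHash 30s
}
```

The minimum across hops is what ends up in the gateway's max-age, matching the earlier point that DNS and keyhash TTLs should both bound the cache lifetime.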
ashark has joined #ipfs
NeoTeo has joined #ipfs
atrapado has quit [Quit: Leaving]
<tilgovi>
ion: what is this for?
rendar has joined #ipfs
<ion>
tilgovi: Computing a Cache-Control/max-age header for IPNS queries on the HTTP gateway.
<tilgovi>
If you're looking up things in DNS you could just not cache them. Let the host network stack do DNS caching.
<tilgovi>
ahhh
<tilgovi>
I see
<tilgovi>
never mind then
<achin>
this is the best i could come up with: cat localrefs | xargs -n1 ipfs refs -e | grep hash
<achin>
however, it appears to be slow as potatoes
cemerick has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed feat/utp from f1dbbed to 9b76adc: http://git.io/vcXyV
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed fix/windows-builds from a838b6b to e6121db: http://git.io/vWq8k
<ipfsbot>
go-ipfs/fix/windows-builds 5cab903 Jeromy: force godeps to save windows import...
<ipfsbot>
go-ipfs/fix/windows-builds d44038a Jeromy: fix path creation so it works on windows...
<ipfsbot>
go-ipfs/fix/windows-builds 0726628 Jeromy: skip cli parse tests on windows due to no stdin...
ygrek_ has joined #ipfs
TheWhisper has joined #ipfs
f[x] has quit [Ping timeout: 260 seconds]
chriscool has quit [Quit: Leaving.]
chriscool has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed fix/windows-builds from e6121db to 89787b9: http://git.io/vWq8k
<ipfsbot>
go-ipfs/fix/windows-builds 2b93700 Jeromy: fix path creation so it works on windows...
<ipfsbot>
go-ipfs/fix/windows-builds 4405987 Jeromy: skip cli parse tests on windows due to no stdin...
<ipfsbot>
go-ipfs/fix/windows-builds 89787b9 Jeromy: add check to makefile to ensure windows builds dont fail silently...
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed fix/windows-builds from 89787b9 to 8d63402: http://git.io/vWq8k
<ipfsbot>
go-ipfs/fix/windows-builds 8d63402 Jeromy: add check to makefile to ensure windows builds dont fail silently...
<ipfsbot>
[go-ipfs] eminence opened pull request #1909: Add a --no-pin option to 'ipfs add' (master...addnopin) http://git.io/vWF1h
nessence has joined #ipfs
<achin>
i made a thing! go is weird
<ion>
nice
<revolve>
whadidya make?
<ion>
<ipfsbot> [go-ipfs] eminence opened pull request #1909: Add a --no-pin option to 'ipfs add' (master...addnopin) http://git.io/vWF1h
<revolve>
eminence == achin?
<ion>
yes
* achin
is me
<revolve>
cool
<achin>
apparently if something starts with a lowercase letter, it is unexported. to export it, you have to give it a capital letter
<ion>
Interesting
<achin>
so when you change something from private to public, you have to change every single use of that thing? i must be missing something, because that doesn't seem right
nicolagreco has quit [Remote host closed the connection]
<ion>
Doesn’t seem to me like you would need to do that often, and when you do, it doesn’t seem like much of a difficulty given a reasonable editor.
<achin>
perhaps
anticore has joined #ipfs
compleatang has quit [Ping timeout: 255 seconds]
<whyrusleeping>
that is correct
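For readers following along, a small made-up package showing the export rule achin ran into:

```go
// Identifiers beginning with an upper-case letter are exported from their
// package; lower-case ones are package-private.
package widget

// NewWidget is exported: other packages call it as widget.NewWidget().
func NewWidget() *Widget {
	return &Widget{size: defaultSize()}
}

// Widget is exported, but its lower-case field stays invisible outside the package.
type Widget struct {
	size int
}

// defaultSize is unexported: only code inside package widget can call it.
// Renaming it to DefaultSize means updating every call site, though the
// compiler will list all of them as errors, so the change is mechanical.
func defaultSize() int { return 42 }
```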
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
nessence has quit [Read error: Connection reset by peer]
martinkl_ has joined #ipfs
<whyrusleeping>
achin: hrm, i'm stuck in a quandary.
nessence has joined #ipfs
<achin>
do tell
gamemanj has quit [Ping timeout: 260 seconds]
<hoboprimate>
I get a missing font in the webui, the log says: Path Resolve error: no link named "fontawesome-webfont.woff2" under QmWjympWW8hpP5Jgfu66ZrqTMykUBjGbFQF2XgfChau5NZ module=core/server
<kpcyrd>
--no-pin would be nice
<victorbjelkholm>
also the compiler helps you to spot all errors with unexported/exported so is pretty trivial to change
<victorbjelkholm>
wow, when I suggested that change, node-ipfs and node-ipfs-api were the repos I had in mind, did not know about the scope of the change... That's a long list
<daviddias>
well, node-ipfs is broken down in tiny modules :)
<victorbjelkholm>
yeah, and that's a good thing
screensaver has joined #ipfs
Oatmeal has quit [Write error: Connection reset by peer]
dignifiedquire has joined #ipfs
dignifiedquire has quit [Remote host closed the connection]
<dignifiedquire>
daviddias, on that note of lots of modules maybe it makes sense for all the ipfs js modules to have their own github org? as they are a bit all over the place atm
martinkl_ has joined #ipfs
Xe is now known as \section{Xe}
<victorbjelkholm>
everything should probably be under ipfs org, no?
<dignifiedquire>
not sure, that org will get pretty crowded that way
Oatmeal has joined #ipfs
<dignifiedquire>
whyrusleeping, any luck with those tests?
cboddy has joined #ipfs
<ipfsbot>
[js-ipfs] diasdavid pushed 1 new commit to rn: http://git.io/vWbft
<ipfsbot>
js-ipfs/rn ae3f688 David Dias: update README
\section{Xe} is now known as Xe
ashark has quit [Ping timeout: 240 seconds]
martink__ has joined #ipfs
martink__ has quit [Read error: Connection reset by peer]
domanic has joined #ipfs
martinkl_ has quit [Ping timeout: 252 seconds]
nonmoose has quit [Ping timeout: 250 seconds]
martinkl_ has joined #ipfs
Vaul has quit [Remote host closed the connection]
dignifiedquire has quit [Remote host closed the connection]
cboddy has quit [Remote host closed the connection]
dignifiedquire has joined #ipfs
<victorbjelkholm>
daviddias, ah, the description of the repo too, heh!
<dignifiedquire>
also url in package.json
compleatang has quit [Ping timeout: 260 seconds]
<whyrusleeping>
dignifiedquire: sorry, been really busy :/
<dignifiedquire>
whyrusleeping, no need to be sorry, just was curious if my instructions sent you on a path to hell
edsilv has joined #ipfs
edsilv has quit [Client Quit]
compleatang has joined #ipfs
patcon has joined #ipfs
infinity0 has quit [Remote host closed the connection]
dignifiedquire has quit [Remote host closed the connection]
infinity0 has joined #ipfs
nonmoose has joined #ipfs
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<hoboprimate>
I had an idea for an icon for ipfs which relates to the space theme: a glass sphere with a distributed network of stars inside
<hoboprimate>
with lines connecting them
<uhhyeahbret>
could be cool :)
dignifiedquire has joined #ipfs
ygrek_ has joined #ipfs
<hoboprimate>
was thinking of doing it in blender, only I have to learn it, which will take some time.
<uhhyeahbret>
it would be a good project to learn blender with
<hoboprimate>
yeah
<uhhyeahbret>
make it happen!
<hoboprimate>
right-yo
<hoboprimate>
:)
<uhhyeahbret>
worst case scenario, you learn blender
<hoboprimate>
true :)
zen|merge_ has quit [Quit: No Ping reply in 180 seconds.]
<dignifiedquire>
daviddias, jbenet any more thoughts on that component chat?
<cryptix>
i noticed that the mouse only switches to the text cursor on the first line. does the <input> shrink depending on line count?
<stoopkid>
hi, does anybody know off-hand how to change which port IPFS uses?
<cryptix>
kyledrake: 8 used... :P
<cryptix>
somebody has
<victorbjelkholm>
cryptix, oh, yeah. The editor I use has some stupid behavior, it only exists on the rows where you have text. That might be the issue
<victorbjelkholm>
stoopkid, it says when you start the daemon
<cryptix>
stoopkid: which one?
<cryptix>
stoopkid: there is swarm, the :8080 gw, or the api
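For reference, the ports being asked about come from the Addresses section of the node's config file (~/.ipfs/config by default). The excerpt below shows the stock go-ipfs defaults from memory, so treat the exact values as indicative; after editing them (or changing them with the "ipfs config" command), the daemon needs a restart:

```json
{
  "Addresses": {
    "Swarm": ["/ip4/0.0.0.0/tcp/4001"],
    "API": "/ip4/127.0.0.1/tcp/5001",
    "Gateway": "/ip4/127.0.0.1/tcp/8080"
  }
}
```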