horrified has quit [Remote host closed the connection]
horrified has joined #ipfs
robattila256 has quit [Ping timeout: 246 seconds]
<lgierth>
sorry bout that
<lgierth>
you can always ping the ops and voicers
<alterego>
Weird, so a node's size is the accumulation of all its children.
<lgierth>
the size of a link to a node is the accumulation of that node and all its children
<alterego>
Yeah
<alterego>
Kinda weird
ZaZ has quit [Read error: Connection reset by peer]
<alterego>
So my node with revision control will have its size grow exponentially.
<alterego>
You should be able to set size to 0 in links when it's not relevant.
<lgierth>
you can, there's just not any api for it i think
<lgierth>
you can construct a link with Size:0 though
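(For illustration: a minimal Go sketch of constructing a DAG link by hand with Size set to 0, roughly along the lines lgierth describes. The Link struct here is from go-ipld-format; field names and import paths vary between go-ipfs versions, and the hash is just one taken from elsewhere in this log.)

```go
package main

import (
	"fmt"

	cid "github.com/ipfs/go-cid"
	ipld "github.com/ipfs/go-ipld-format"
)

func main() {
	// CID of the child node we want to link to (an example hash).
	childCid, err := cid.Decode("QmXbiPRtsZ5y85tW9wFBZNaPcg9j9dXDTu8174Y97UuWUB")
	if err != nil {
		panic(err)
	}

	// Construct the link by hand. Size is normally the cumulative size of
	// the child node and all of its descendants, but nothing in the data
	// structure itself stops us from setting it to 0 when it's not relevant.
	link := ipld.Link{
		Name: "child",
		Size: 0,
		Cid:  childCid,
	}

	fmt.Printf("link %q -> %s (size %d)\n", link.Name, link.Cid, link.Size)
}
```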
<alterego>
Yeah, you can. Except in js-ipfs-api :D
<alterego>
It complains if the size is < 1 :D
<alterego>
I should probably submit an issue for that.
<lgierth>
there might be checks like that in go-ipfs too i guess
<lgierth>
yeah let's discuss what we can do in an issue
<lgierth>
another point for example is whether a link's size should take deduplication into account
<alterego>
If there are, they were added recently, because I used to create zero-sized links with the go api all the time.
<alterego>
(Working on a port of some go code of mine to js)
<lgierth>
ah interesting
<alterego>
At least now I'm hitting an exception raised in js-ipfs-api before it even calls the api.
<alterego>
tbh, having size in links never really sat well with me. I understand it makes things like directory listings easier, but from a pure DAG perspective I don't like it; I wish it were optional.
<lgierth>
:)
testeslok has joined #ipfs
<testeslok>
hi @all, I have a question. is it possible to reset your peer id?
<alterego>
testeslok: do you want to keep any pinned data? :)
<testeslok>
No
<testeslok>
just a reset :-)
<alterego>
testeslok: then just: rm -rf ~/.ipfs
slothbag has joined #ipfs
<alterego>
testeslok: and rerun: ipfs init
<testeslok>
thanks so much!
<alterego>
np
<testeslok>
I appreciate it!
<testeslok>
Have a good one!
slothbag has left #ipfs [#ipfs]
testeslok has quit [Client Quit]
pfrazee has quit [Read error: Connection reset by peer]
wallacoloo_____ has joined #ipfs
pfrazee has joined #ipfs
<dignifiedquire>
lgierth: can you op me again? I seem to lose my status all the time
<lgierth>
dignifiedquire: jbenet and whyrusleeping can give you op with chanserv
<dignifiedquire>
ta
<pinbot>
now pinning /ipfs/QmXbiPRtsZ5y85tW9wFBZNaPcg9j9dXDTu8174Y97UuWUB
infinity0 has quit [Remote host closed the connection]
utanapishtim has joined #ipfs
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
john1 has joined #ipfs
utanapishtim has quit [Ping timeout: 265 seconds]
Furao has joined #ipfs
utanapishtim has joined #ipfs
<Furao>
Hey! Anyone with OrbitDB experience? I'm just wondering: if I use an EventStore, is it supposed to be persistent? If some entries are added and the process exits, is it possible to retrieve those events later in a separate process?
pfrazee has quit [Read error: Connection reset by peer]
MDude has quit [Ping timeout: 250 seconds]
utanapishtim has quit [Quit: leaving]
MDude has joined #ipfs
brimston3 has quit [Changing host]
brimston3 has joined #ipfs
brimston3 is now known as brimstone
Sophrosyne has quit [Ping timeout: 250 seconds]
bren2010_ has quit [Quit: ZNC 1.6.1 - http://znc.in]
pfrazee has joined #ipfs
bren2010 has joined #ipfs
pfrazee_ has joined #ipfs
pfrazee has quit [Read error: Connection reset by peer]
<mib_kd743naq>
lgierth: are you at 33c3? I want to chat about the link size requirement a bit; it seems like a real step backwards to make it a hard requirement (easier to explain why in person)
_rht has joined #ipfs
<mib_kd743naq>
( or anyone else really... whyrusleeping perhaps? :)
mib_kd743naq has quit [Quit: Page closed]
ckwaldon has joined #ipfs
ylp has joined #ipfs
ckwaldon1 has joined #ipfs
ckwaldon has quit [Ping timeout: 250 seconds]
ckwaldon1 is now known as ckwaldon
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
ZaZ has joined #ipfs
ckwaldon has quit [Ping timeout: 265 seconds]
ckwaldon has joined #ipfs
ygrek_ has quit [Ping timeout: 246 seconds]
infinity0 has quit [Remote host closed the connection]
ckwaldon1 has joined #ipfs
ckwaldon has quit [Ping timeout: 256 seconds]
infinity0 has joined #ipfs
infinity0 has quit [Changing host]
infinity0 has joined #ipfs
ckwaldon has joined #ipfs
cemerick has joined #ipfs
ckwaldon1 has quit [Ping timeout: 268 seconds]
ckwaldon1 has joined #ipfs
ckwaldon has quit [Ping timeout: 250 seconds]
ckwaldon1 is now known as ckwaldon
ckwaldon1 has joined #ipfs
ckwaldon has quit [Ping timeout: 246 seconds]
ckwaldon has joined #ipfs
ckwaldon1 has quit [Ping timeout: 260 seconds]
ckwaldon has quit [Ping timeout: 248 seconds]
ckwaldon has joined #ipfs
ckwaldon1 has joined #ipfs
ckwaldon has quit [Ping timeout: 258 seconds]
jchevalay has joined #ipfs
<jchevalay>
hi all
ckwaldon1 has quit [Ping timeout: 256 seconds]
chovy[m] has left #ipfs ["User left"]
<jchevalay>
is haad here?
ckwaldon has joined #ipfs
cemerick has quit [Ping timeout: 260 seconds]
ckwaldon has quit [Ping timeout: 246 seconds]
<whyrusleeping>
i'm at 33c3
ckwaldon has joined #ipfs
ckwaldon1 has joined #ipfs
yoshuawuyts has quit [Quit: Connection closed for inactivity]
<whyrusleeping>
we need to use the deadlines in that code
<Kubuxu>
yeah, writeMessage needs to take a context
<whyrusleeping>
that too, but networked writes can't respect a context yet
<whyrusleeping>
so we just need a basic timeout, at least for now
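(A rough sketch of the kind of basic timeout being discussed, not the actual go-ipfs code: since a blocking network write can't observe a context, a write deadline on the underlying net.Conn makes it fail instead of hanging. The helper name writeWithTimeout is hypothetical.)

```go
package main

import (
	"net"
	"time"
)

// writeWithTimeout is a hypothetical helper: a context can't interrupt the
// blocking conn.Write itself, but a write deadline on the connection makes
// the write return a timeout error instead of hanging forever.
func writeWithTimeout(conn net.Conn, msg []byte, timeout time.Duration) error {
	if err := conn.SetWriteDeadline(time.Now().Add(timeout)); err != nil {
		return err
	}
	// Clear the deadline afterwards so later writes aren't affected.
	defer conn.SetWriteDeadline(time.Time{})

	_, err := conn.Write(msg)
	return err
}
```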
ckwaldon1 has joined #ipfs
ckwaldon has quit [Ping timeout: 258 seconds]
ckwaldon1 is now known as ckwaldon
<whyrusleeping>
Kubuxu: wanna throw up a PR to address this?
<whyrusleeping>
(the root of the problem here is that yamux is broken)
<dignifiedquire>
is there a multiplexer that isn't? Oo
ckwaldon1 has joined #ipfs
ckwaldon has quit [Ping timeout: 256 seconds]
ckwaldon1 is now known as ckwaldon
<dignifiedquire>
whyrusleeping: multiplex worked pretty well in my tests, why is it disabled by default in go-ipfs?
<Mateon1>
For a rather long time I ran with --enable-multiplex-experiment; it works well
<Mateon1>
Not sure whether two nodes with multiplex actually use multiplex rather than yamux, though
<Caterpillar>
There is one thing I haven't understood from reading the ipfs documentation. How can a normal user browse a website hosted on the IPFS network? Do they need to install an IPFS client / extension for their browser?
<whyrusleeping>
dignifiedquire: because it's not been well tested yet
<whyrusleeping>
Mateon1: you can check the output of `ipfs swarm peers -v --enc=json`
<whyrusleeping>
it lists the stream types in the json verbose output
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
yoshuawuyts has joined #ipfs
<brodo>
@Caterpillar currently you use a local proxy
<Kubuxu>
Caterpillar: yes, or use https://ipfs.io/ipns/ipfs.io
<Caterpillar>
[15:17] <Kubuxu> ipfs.io is running on ipfs :p
<Caterpillar>
but you can access it even without running an ipfs daemon.
<Caterpillar>
mmh...
<victorbjelkholm>
Caterpillar, yeah, the public gateway that we run at ipfs.io/ipfs is being used to serve ipfs.io
<Caterpillar>
the best way to clear all my doubts is to ask you the following question: does ipfs act like a sort of Tor network, where clients have to use a specific client software to use the network?
Guest51180 has quit [Ping timeout: 265 seconds]
<Kubuxu>
Caterpillar: the aim is for every user to run the client
<Kubuxu>
currently we provide a gateway so users who don't have a client installed can use it too
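(To illustrate the two options just described: fetching the same content through the public gateway, which needs no local client, versus through a locally running daemon's gateway. Port 8080 is the common default for the local gateway, not a guarantee.)

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// fetch downloads a URL and returns the number of bytes received.
func fetch(url string) (int, error) {
	resp, err := http.Get(url)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return 0, err
	}
	return len(body), nil
}

func main() {
	// Through the public gateway: works without any local IPFS client.
	if n, err := fetch("https://ipfs.io/ipns/ipfs.io"); err == nil {
		fmt.Println("public gateway returned", n, "bytes")
	}
	// Through a local daemon's gateway (commonly port 8080), if one is running.
	if n, err := fetch("http://127.0.0.1:8080/ipns/ipfs.io"); err == nil {
		fmt.Println("local gateway returned", n, "bytes")
	}
}
```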
<Caterpillar>
ah ok
jong[m] has left #ipfs ["User left"]
espadrine has quit [Ping timeout: 260 seconds]
espadrine has joined #ipfs
bastianilso_____ has quit [Quit: bastianilso_____]
bastianilso_____ has joined #ipfs
<Caterpillar>
stupid question: I have a server with a symmetric 100 Mbit/s internet connection. Is there anything useful I can do to help the ipfs network?
theoanthropomorp has joined #ipfs
<kpcyrd>
Caterpillar: pin good content (seeding)
<Caterpillar>
kpcyrd: content generated by me?
<kpcyrd>
Caterpillar: this isn't automated, so you actually have to decide which content you consider good. I ran an archlinux mirror for a while (but the disk crapped out and I still haven't fixed it)
<Caterpillar>
kpcyrd: mmh I am a Fedora contributor, I could run a sort of Fedora mirror
<Caterpillar>
kpcyrd: yeah, a Fedora package would be awesome, but there is a need to unbundle all the third-party libraries bundled in https://github.com/ipfs/go-ipfs
<kpcyrd>
Caterpillar: are there examples for go packages in fedora?
<Caterpillar>
kpcyrd: could you suggest me a name of a famous go based software? So that I can search it
<Caterpillar>
on Fedora
<kpcyrd>
Caterpillar: you don't have to create the content if you want to pin it. For example, if you consider /ipfs/QmZzQDki6px3Li7fDxXAqUbVA8zyu44vo9RRgsYXffvHfP a good collection of butter and margarine, you can run `ipfs pin add QmZzQDki6px3Li7fDxXAqUbVA8zyu44vo9RRgsYXffvHfP` and that mostly means "get the content, keep it around and provide it to people asking for it"
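(The same pin can be done programmatically. A minimal sketch using the go-ipfs-api client library against a local daemon, assuming the API listens on the default port 5001 and that Shell.Pin behaves like `ipfs pin add`.)

```go
package main

import (
	"fmt"

	shell "github.com/ipfs/go-ipfs-api"
)

func main() {
	// Connect to the local daemon's HTTP API (commonly port 5001).
	sh := shell.NewShell("localhost:5001")

	// Recursively pin the content, i.e. fetch it, keep it around,
	// and provide it to anyone who asks for it.
	const hash = "QmZzQDki6px3Li7fDxXAqUbVA8zyu44vo9RRgsYXffvHfP"
	if err := sh.Pin(hash); err != nil {
		fmt.Println("pin failed:", err)
		return
	}
	fmt.Println("pinned", hash)
}
```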
<whyrusleeping>
i'll try and pin the latest every so often
<Mateon1>
:)
<Mateon1>
There is a lot of deduplication happening, as most of the time I only need to change /archives/index.html and the second-to-last comic, and add index.html for the last comic (plus the image, of course).
<ligi>
lgierth: tomorrow 14:25
maxlath has quit [Ping timeout: 252 seconds]
<lgierth>
Mateon1: nice -- wanna post that to the ipfs/archives repo?
<Mateon1>
The diff of the last two archives looks nice: QmWR6UgArrkdaVMULMH5MwdBxTna2WFU1p83nLCMSYZiDW/diff.txt
<lgierth>
ligi: cool cool :) i'm at home watching from the couch
<Mateon1>
lgierth: Sure
<ligi>
lgierth: enjoy - but we also have couches here ;-)
<lgierth>
:)
<whyrusleeping>
lgierth: he did, on the xkcd issue
<lgierth>
oh cool -- i didn't actually look at the issues :P just took the existing archives website
<brodo>
"We Fix the Net session will include talks on developing secure alternatives to current internet protocols. We might hold an organized discussion or panel as well. This session is organized by GNUnet and pEp, and acts as a successor to the YBTI sessions of previous years."
<lgierth>
i disagree with the sentiment that "someone broke the internet", but i applaud the effort nonetheless :)
<lgierth>
the problems we see today originate from the very beginnings of the internet
ygrek_ has quit [Ping timeout: 256 seconds]
<brodo>
@lgierth yea, the same is true for the web. TimBL's paper for the hypertext conference was rejected for a reason.
gigq has quit [Read error: Connection reset by peer]
<lgierth>
brodo: what was the matter with that paper?
<lgierth>
interesting
gigq has joined #ipfs
<brodo>
i think one reason was that web links are one-directional, which was very unusual for the time
<mildred>
that allows storing any value for any key. Currently, the way IPNS works, and the way public keys are stored, is that the key /pk/<fingerprint> is associated with the CID of its value. To get a public key, you thus have a level of indirection. First, fetch the content id/content hash for the key in the ValueStore DHT, then find out where this value could be found using the ContentRouting interface, which yields a list of peers. I am trying to
<mildred>
construct a record system with the current routing layer from IPFS. My question is why we need an extra level of indirection for things such as naming records or public keys, and why the CID (content ID) couldn't refer to things like "an object with such a multihash", "a public key with such a fingerprint", or even "a naming record using such a method and such a key". We could extend the CID to allow representation of these references and avoid an extra
<mildred>
level of indirection. We could directly ask the DHT which peers provide such a public key, without indirection. Anyone to help me understand?
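(To make the two-step lookup mildred describes concrete, a simplified Go sketch follows. The interfaces are hypothetical stand-ins shaped like the routing layer under discussion, a ValueStore keyed by /pk/<fingerprint> plus a ContentRouting that maps a CID to providers; they are not the actual go-libp2p signatures.)

```go
package routingsketch

import "context"

// Hypothetical, simplified versions of the routing interfaces under discussion.
type ValueStore interface {
	// GetValue resolves a key such as "/pk/<fingerprint>" to a small record,
	// here assumed to contain the CID of the actual value.
	GetValue(ctx context.Context, key string) ([]byte, error)
}

type ContentRouting interface {
	// FindProviders returns peers that can serve the block with the given CID.
	FindProviders(ctx context.Context, cid string) ([]string, error)
}

// LookupPublicKey shows the extra level of indirection being questioned:
// fingerprint -> CID (via the value store) -> providers (via content routing),
// instead of asking directly "which peers provide the key with this fingerprint".
func LookupPublicKey(ctx context.Context, vs ValueStore, cr ContentRouting, fingerprint string) ([]string, error) {
	// Step 1: resolve /pk/<fingerprint> to the CID of the key material.
	cidBytes, err := vs.GetValue(ctx, "/pk/"+fingerprint)
	if err != nil {
		return nil, err
	}

	// Step 2: ask the content routing layer who can provide that CID.
	return cr.FindProviders(ctx, string(cidBytes))
}
```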
mguentner2 is now known as mguentner
Encrypt has quit [Quit: Quit]
infinity0 has joined #ipfs
<Kubuxu>
mildred: I don't see the "indirection" here
<Kubuxu>
to get from the fingerprint to a record we need the record and the public key
pp99s has quit []
<Kubuxu>
fingerprint is the peerID in this case
<mildred>
the thing is different: we know the fingerprint and we want the complete key
<Kubuxu>
yup
<mildred>
(fingerprint is just an example)
<Kubuxu>
we could unify everything to be an ipfs block, yes
<Kubuxu>
it just wasn't done before, as we had no concept of CID and there was a clear separation
<mildred>
I want to draft a naming system on my side
vorce[m] has joined #ipfs
<mildred>
You're right, CID didn't exist before
<mildred>
now that it does, we could unify things...
<mildred>
(I may not be fully available now)
<pinbot>
[host 7] failed to pin /ipfs/QmNQ1TQzC3D4MMJFN5LVCKPBryRg968QTCvQ29miamxQq8: 504 Gateway Time-out
<pinbot>
[host 2] failed to pin /ipfs/QmNQ1TQzC3D4MMJFN5LVCKPBryRg968QTCvQ29miamxQq8: 504 Gateway Time-out
<pinbot>
[host 1] failed to pin /ipfs/QmNQ1TQzC3D4MMJFN5LVCKPBryRg968QTCvQ29miamxQq8: 504 Gateway Time-out
<pinbot>
[host 5] failed to pin /ipfs/QmNQ1TQzC3D4MMJFN5LVCKPBryRg968QTCvQ29miamxQq8: 504 Gateway Time-out
<pinbot>
[host 4] failed to pin /ipfs/QmNQ1TQzC3D4MMJFN5LVCKPBryRg968QTCvQ29miamxQq8: 504 Gateway Time-out
<pinbot>
[host 0] failed to pin /ipfs/QmNQ1TQzC3D4MMJFN5LVCKPBryRg968QTCvQ29miamxQq8: 504 Gateway Time-out
yoshuawuyts has joined #ipfs
<pinbot>
[host 3] failed to pin /ipfs/QmNQ1TQzC3D4MMJFN5LVCKPBryRg968QTCvQ29miamxQq8: 504 Gateway Time-out
<pinbot>
[host 6] failed to pin /ipfs/QmNQ1TQzC3D4MMJFN5LVCKPBryRg968QTCvQ29miamxQq8: 504 Gateway Time-out
<ianopolous_>
I'm looking into browser caching of requests and it seems that things like /ipfs/$multihash are cached, but /api/v0/object/get?arg=$multihash is not, presumably because the key is in the query rather than the path of the url. I've noticed that the dag api takes the same uncacheable form. Is that deliberate?
infinity0 has quit [Remote host closed the connection]
<victorbjelkholm[>
Hm, we should be able to leverage ETag to solve that. Don't think it's deliberate, should be cached if the arg is the same
infinity0 has joined #ipfs
<ianopolous_>
I don't think the ipfs server currently includes the ETag header on that. However, I've tried adding it to a local version, and chrome still doesn't cache it
<deltab>
do you have something intercepting that, or does it go directly to ipfs.io?
wallacoloo_____ has joined #ipfs
<ianopolous_>
I have my own server which I've made emulate the ipfs api
<deltab>
and the name?
<ianopolous_>
name of what?
<lgierth>
ianopolous_: we've just never had someone come up with a demand for Etag headers on the api :)
<deltab>
hostname
<ianopolous_>
:-)
<ianopolous_>
The local server I've tested adding the header to for object queries is running on localhost (caching works fine for other localhost urls without the query-string variation)
<deltab>
what I mean is, are you accessing your local server?
<lgierth>
the semantics of etag would be highly dependent on the specific endpoint
<ianopolous_>
yes accessing my own local server which emulates the ipfs http api over localhost
<ianopolous_>
sure, but any content-addressed get should be cacheable forever
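(A sketch of what caching-friendly headers on a content-addressed endpoint could look like, illustrative only and not how go-ipfs implements it: since the response body is fully determined by the hash, the hash works as a strong ETag and the response can be marked immutable.)

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// serveObject is a hypothetical handler emulating /api/v0/object/get?arg=<hash>.
func serveObject(w http.ResponseWriter, r *http.Request) {
	hash := r.URL.Query().Get("arg")
	if hash == "" {
		http.Error(w, "missing arg", http.StatusBadRequest)
		return
	}

	// The content is fully determined by the hash, so the hash makes a
	// strong ETag and the response can be cached essentially forever.
	w.Header().Set("ETag", `"`+hash+`"`)
	w.Header().Set("Cache-Control", "public, max-age=29030400, immutable")

	// If the browser already has this exact object, answer 304 Not Modified.
	if r.Header.Get("If-None-Match") == `"`+hash+`"` {
		w.WriteHeader(http.StatusNotModified)
		return
	}

	// ... look up and write the object; elided here.
	fmt.Fprintf(w, "object %s\n", hash)
}

func main() {
	http.HandleFunc("/api/v0/object/get", serveObject)
	log.Fatal(http.ListenAndServe("127.0.0.1:5002", nil))
}
```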
infinity0 has quit [Remote host closed the connection]
<lgierth>
oh sure, but depending on which other arguments you pass
<lgierth>
i'd be down to add it
<lgierth>
PRs welcome :)
<lgierth>
you can also open an issue to discuss
<ianopolous_>
lgierth: my use case is that peergos relies entirely on the object api. Before now we had our own http api, with caching working, but I've moved to emulating the ipfs api so that we can self-host on ipfs.
<ianopolous_>
I'm hoping to move to the simpler dag api once it supports cbor, but it currently has the same non-caching behaviour
<lgierth>
yeah it makes sense
<ianopolous_>
sure, I'll open an issue. In go-ipfs?