voxelot has quit [Remote host closed the connection]
<drathir>
wth is goin on there?
voxelot has joined #ipfs
voxelot has joined #ipfs
voxelot has quit [Changing host]
M-mubot has joined #ipfs
M-davidar-test has joined #ipfs
M-rschulman has joined #ipfs
M-alien has joined #ipfs
M-staplemac has joined #ipfs
M-fil has joined #ipfs
<lgierth>
whyrusleeping: it went up to 1.55m goroutines when dignifiedquire was pinning stackexchange
<drathir>
M-whyrusleeping: is it you?
M-Peer2Peer has joined #ipfs
M-harlan has joined #ipfs
simonv3 has joined #ipfs
M-rschulman1 has joined #ipfs
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
joshbuddy has joined #ipfs
ralphtheninja has quit [Remote host closed the connection]
ralphtheninja has joined #ipfs
<fuzzybear3965>
lgierth, Is biham a server that runs an ipfs node?
<fuzzybear3965>
I'm curious what you're talking about
<fuzzybear3965>
*. But, I only have ~5 minutes before I need to leave to catch a bus.
disgusting_wall has joined #ipfs
<fuzzybear3965>
Is >1 million goroutines indicative of how active ipfs is?
<fuzzybear3965>
Is biham responsible for routing traffic and storing files such that the number of spawned goroutines is indicative of the level of activity of the network?
<lgierth>
fuzzybear3965: biham is just one of the ipfs nodes with a big disk
<fuzzybear3965>
lgierth, I gotta bounce. If you answer these questions then I will read them later (https://botbot.me/freenode/ipfs/).
<lgierth>
where we store big datasets
<lgierth>
it has 17T storage
<fuzzybear3965>
Oh, okay. What does the level of activity tell you?
<lgierth>
that there's a problem :)
<fuzzybear3965>
Oh.
<lgierth>
seems to be one goroutine per object or so and that's clearly wrong, they should get collected eventually
<lgierth>
and we should never get *so* high
<fuzzybear3965>
If 1 million people were actively sharing files using ipfs, would that spawn ~1 million goroutines?
<fuzzybear3965>
Or, is that a completely different thing entirely?
<fuzzybear3965>
I.e. what actions spawn goroutines on biham?
<lgierth>
yeah these goroutines came from just adding data to it
<fuzzybear3965>
Oh, okay.
<fuzzybear3965>
Hmmmm.... Okay. I gotta jet.
<fuzzybear3965>
Thanks for the info.
<lgierth>
see you later
<fuzzybear3965>
Haha, literally.
fuzzybear3965 has quit [Quit: Leaving]
NightRa has quit [Quit: Connection closed for inactivity]
wiedi has joined #ipfs
wiedi_ has quit [Ping timeout: 240 seconds]
patcon has quit [Ping timeout: 264 seconds]
<whyrusleeping>
mafintosh: okay fine, you win
<whyrusleeping>
feross: i'm looking at you, still havent gotten a google docs thing
user24 has quit [Ping timeout: 256 seconds]
reit has joined #ipfs
<whyrusleeping>
lgierth: did you get me a stack dump from biham?
<whyrusleeping>
i'd like to take a look
<whyrusleeping>
i have my suspicions about what it is, and its probably what i think it is, but i just want to confirm
<lgierth>
yeah fuck i think it just restarted :/
<lgierth>
meh
<lgierth>
after it saw a ton of load for ~20min
<lgierth>
load 25 is a bit much for 8 cores
vijayee has joined #ipfs
<whyrusleeping>
ouch
voxelot has quit [Ping timeout: 245 seconds]
xelra_ is now known as xelra
hoony has joined #ipfs
<lgierth>
whyrusleeping: are the logs interesting though?
<lgierth>
they're in /var/lib/docker/containers/f5089c55c6339e187853ddea851097648b69063bac21e26b394d111aad12c742/
<lgierth>
copied them to /root
<lgierth>
whyrusleeping: do you want both ipfs.cpuprof/ipfs.memprof and the dumps of :5001/debug/pprof?
<reit>
i know i can `echo hello | ipfs block put` to create custom chunks
<reit>
but how do i stick them together into an actual object?
<reit>
my first thought was object patch, but that requires a name and a key - normal IPFS objects have a simple list of anonymous chunks and a few (probably important) unicode characters in 'Data'
<Kubuxu>
reit: There is an IPLD spec in the process of finalization that will allow exactly that. The current format is not prepared for making custom blocks easily.
<reit>
i see, so basically for now just wait then
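The idea reit is after — content-addressing raw chunks and then linking them from a parent object — can be sketched like this. This is illustrative Python only: a plain sha256 and JSON stand in for IPFS's real multihash and protobuf/CBOR encodings, and the `put_block`/`make_object` helpers are hypothetical, not part of any IPFS API.

```python
import hashlib
import json

def put_block(store, data):
    """Content-address a chunk, like `ipfs block put` (illustrative sha256, not a real multihash)."""
    h = hashlib.sha256(data).hexdigest()
    store[h] = data
    return h

def make_object(store, links):
    """Create a parent node whose links refer to existing chunks by hash."""
    node = json.dumps({"Links": links, "Data": ""}, sort_keys=True).encode()
    return put_block(store, node)

store = {}
c1 = put_block(store, b"hello ")
c2 = put_block(store, b"world")
# The parent object only names the chunk hashes; fetching it tells you which chunks to fetch.
root = make_object(store, [{"Hash": c1}, {"Hash": c2}])
```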
s_kunk has joined #ipfs
Senji has quit [Disconnected by services]
Senj has joined #ipfs
Senj is now known as Senji
computerfreak1 has quit [Read error: Connection reset by peer]
computerfreak has quit [Remote host closed the connection]
hartor1 has joined #ipfs
hartor has quit [Ping timeout: 240 seconds]
hartor1 is now known as hartor
Encrypt has quit [Quit: Quitte]
<whyrusleeping>
good mornin everyone
<dignifiedquire>
whyrusleeping: good morning
<dignifiedquire>
how is the north?
<whyrusleeping>
its pretty nice!
<whyrusleeping>
its snowing right now
<whyrusleeping>
but its still warmer than berlin
<dignifiedquire>
that sounds nice
<whyrusleeping>
yeah
<whyrusleeping>
and daviddias got his luggage finally
<whyrusleeping>
so everything is going great
<dignifiedquire>
what happened to his luggage oO
<dignifiedquire>
did they send it back to Portugal?
<whyrusleeping>
SAS lost it
<dignifiedquire>
lol
<dignifiedquire>
that sucks
<whyrusleeping>
yeeeep
<dignifiedquire>
soooo pinning is not working out so great I heard from lgierth :P
<whyrusleeping>
meh
<whyrusleeping>
did you do the refs thing first?
<dignifiedquire>
no
<dignifiedquire>
what's the difference?
<whyrusleeping>
uhm
<whyrusleeping>
UX?
<dignifiedquire>
ah okay
<whyrusleeping>
but refs shouldnt ever time out
<whyrusleeping>
could you try running the pin again?
Senji has quit [Read error: Connection reset by peer]
Senji has joined #ipfs
SpX has joined #ipfs
<dignifiedquire>
refs is telling me my path is invalid
<dignifiedquire>
nvm I'm stupid
<dignifiedquire>
it has begun
<dignifiedquire>
this time with ipfs refs -r && ipfs pin
Encrypt has joined #ipfs
<dignifiedquire>
shouldn't the pin be recursive as well though?
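Conceptually, `ipfs refs -r` walks the merkle DAG from the root and lists every reachable hash once, which is why running it before the pin pre-fetches the blocks the pin will need. A rough sketch of that walk (illustrative Python, not the real implementation; the toy `dag` mapping is made up):

```python
def refs_recursive(dag, root):
    """Enumerate every hash reachable from root, depth-first, each exactly once --
    roughly what `ipfs refs -r <hash>` prints."""
    seen, order = set(), []
    def walk(h):
        for child in dag.get(h, []):
            if child not in seen:
                seen.add(child)
                order.append(child)
                walk(child)
    walk(root)
    return order

dag = {"root": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
print(refs_recursive(dag, "root"))  # "c" is listed once even though both "a" and "b" link it
```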
__konrad_ has quit [Remote host closed the connection]
__konrad_ has joined #ipfs
guest23223 has quit [Quit: bbl]
<ipfsbot>
[go-ipfs] whyrusleeping pushed 2 new commits to master: http://git.io/vuMMQ
<sivachandran>
is there such a command? couldn't locate it in the docs.
jaboja has joined #ipfs
<achin>
yep. start with `ipfs object --help` and drill down from there
<achin>
(or even `ipfs --help`)
<NeoTeo>
whyrusleeping: are you ok with finding that place or want to meet somewhere you know how to get to?
<sivachandran>
Thanks. Basically I am trying to do this: say I've added a directory D1 with files F1 and F2 to IPFS. Now I want to add another directory D2 with files F1, F2 and F3. As the files F1 and F2 already exist in IPFS I don't want to duplicate them in storage.
<whyrusleeping>
NeoTeo: that works, are you there right now?
<NeoTeo>
Will be there in 20 mins
<whyrusleeping>
sivachandran: just add the new directory
<whyrusleeping>
it will all be okay :)
<NeoTeo>
whyrusleeping: \o/
<sivachandran>
whyrusleeping: In D2 I have only F3 but I want to stitch F1 and F2.
<achin>
F1 and F2 will always have the same hash, no matter what directory they are in
<achin>
so there will be no duplication
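achin's point follows directly from content addressing: a file's identifier is derived from its bytes, so two directory objects can link the same file without storing it twice. An illustrative sketch, with plain sha256 and JSON standing in for IPFS's real hashing and object format:

```python
import hashlib
import json

def cid(data):
    # Illustrative stand-in for an IPFS multihash: identical bytes -> identical ID.
    return hashlib.sha256(data).hexdigest()

f1, f2, f3 = b"file one", b"file two", b"file three"
# D1 links F1 and F2; D2 links F1, F2 and F3 -- by hash, not by copy.
d1 = {"F1": cid(f1), "F2": cid(f2)}
d2 = {"F1": cid(f1), "F2": cid(f2), "F3": cid(f3)}
assert d1["F1"] == d2["F1"]  # same file, same hash: one stored copy, two directory links
d1_hash = cid(json.dumps(d1, sort_keys=True).encode())
d2_hash = cid(json.dumps(d2, sort_keys=True).encode())
assert d1_hash != d2_hash  # only the small directory objects differ
```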
<whyrusleeping>
NeoTeo: cool, daviddias and i will head up soon
<whyrusleeping>
sivachandran: ah, so D2 doesnt actually contain F1 and F2 on disk?
<sivachandran>
yes, it doesn't
<whyrusleeping>
ah, then yeah
<whyrusleeping>
take HASH=(hash of D1)
<whyrusleeping>
then 'ipfs object patch $HASH add-link F3 <hash of F3>'
<sivachandran>
Think of it this way: I am downloading directories and adding them to IPFS. The directories can share files. To save bandwidth I will not download a directory's file if it is already present in IPFS. But I still want my D2 to contain F1, F2 & F3 in the IPFS node.
<sivachandran>
thanks. I will try that.
<sivachandran>
Another question: is there a way to add sub-names in IPNS? I know I can publish an object hash on the node hash. But I want to add more objects under that name.
rombou has joined #ipfs
<achin>
do the ipfs.io gateways know how to talk to 0.4.0 nodes?
<sivachandran>
Suppose if I publish a directory and then add a file to the directory, I want to see the newly added file when I resolve the directory.
<whyrusleeping>
NeoTeo: ETA ~20min
<achin>
i don't think ipfs has built-in a way to difference two trees, but i thought someone in here wrote a tool to do that
sivachandran has quit [Remote host closed the connection]
hashcore has quit [Ping timeout: 250 seconds]
<lgierth>
dignifiedquire: did pinning succeed?
<dignifiedquire>
lgierth: still running
<lgierth>
ok
<lgierth>
goroutines went up a bit but are plateauing now
<lgierth>
~23k
<NeoTeo>
whyrusleeping: Cool, I'm there now. Place is packed:/
<dignifiedquire>
not using pinning right now, it's just running refs -r atm
<dignifiedquire>
and when that's finished it'll pin those files
e-lima has quit [Ping timeout: 250 seconds]
<richardlitt>
whyrusleeping dignifiedquire daviddias please take a look at this asap, I want to keep going and I want to know if this format is easily understood and clear to everyone: https://github.com/ipfs/api/pull/17
<dignifiedquire>
richardlitt: it must be a POST of type multipart/form-data
<dignifiedquire>
as far as I understand
<richardlitt>
dignifiedquire: Cool. Can you throw that in the issue? How does the issue look?
ylp1 has quit [Quit: Leaving.]
<richardlitt>
Like, does the format of PRs per each command make sense? Do you see what I am doing?
ulrichard has quit [Remote host closed the connection]
<dignifiedquire>
yes that makes a lot of sense
<dignifiedquire>
makes it much easier to review and discuss
pfraze_ has quit [Remote host closed the connection]
<richardlitt>
Cool.
<richardlitt>
I'm also adding everything I can. I know it's silly to have headers, attributes, and body, but I want this API to be complete, even if redundant
<whyrusleeping>
NeoTeo: wanna meet at the Lego store?
<whyrusleeping>
I really wanted to go
ilyaigpetrov has quit [Quit: Connection closed for inactivity]
<NeoTeo>
whyrusleeping: ok, be there in 10
<whyrusleeping>
Wooo!
mildred has quit [Quit: Leaving.]
arpu has joined #ipfs
<dignifiedquire>
richardlitt: not silly at all, badly needed I'd say
pfraze has joined #ipfs
voxelot has joined #ipfs
rombou has quit [Ping timeout: 265 seconds]
step21_ is now known as step21
zorglub27 has joined #ipfs
libman has joined #ipfs
Not_ has joined #ipfs
M-erikj is now known as erikj`
M-rschulman1 is now known as rschulman
step21 is now known as step21_
rombou has joined #ipfs
zugz has quit [Quit: Lost terminal]
rombou has quit [Quit: Leaving.]
pfraze has quit [Remote host closed the connection]
mrdomino_ has joined #ipfs
mrdomino has quit [Ping timeout: 255 seconds]
corvinux has joined #ipfs
<brimstone>
richardlitt: excellent PR, i was looking for that yesterday
ispeedtoo has quit [Ping timeout: 252 seconds]
supertyler has quit [Ping timeout: 252 seconds]
simonv3 has joined #ipfs
joshbuddy has joined #ipfs
libman has quit [Ping timeout: 245 seconds]
libman has joined #ipfs
asyncsrc has joined #ipfs
corvinux has quit [Remote host closed the connection]
zeroish has joined #ipfs
devbug has joined #ipfs
devbug has quit [Ping timeout: 260 seconds]
ralphtheninja has quit [Remote host closed the connection]
s_kunk has quit [Ping timeout: 240 seconds]
rombou has joined #ipfs
joshbuddy has quit [Quit: joshbuddy]
Encrypt has quit [Quit: Quitte]
lempa has joined #ipfs
zorglub27 has quit [Remote host closed the connection]
SpX has quit [Ping timeout: 255 seconds]
ralphtheninja has joined #ipfs
zorglub27 has joined #ipfs
mildred has joined #ipfs
devbug has joined #ipfs
rombou has quit [Ping timeout: 265 seconds]
<chriscool>
Ladee: just ask or say what you want and then people interested to talk about the same thing may answer
pfraze has joined #ipfs
libman has quit [Ping timeout: 260 seconds]
mrdomino_ is now known as mrdomino
rombou has joined #ipfs
ygrek has joined #ipfs
libman has joined #ipfs
prosody is now known as qrosody
neurosis12 has quit [Remote host closed the connection]
hartor has quit [Ping timeout: 260 seconds]
jaboja has quit [Remote host closed the connection]
joshbuddy has joined #ipfs
ispeedtoo has joined #ipfs
palliate has quit [Ping timeout: 256 seconds]
M-jon is now known as jfred-matrix
libman has quit [Ping timeout: 265 seconds]
Pria has joined #ipfs
libman has joined #ipfs
hartor has joined #ipfs
rendar has quit [Ping timeout: 260 seconds]
patcon has joined #ipfs
devbug has quit [Ping timeout: 260 seconds]
ispeedtoo has quit [Ping timeout: 252 seconds]
reit has quit [Ping timeout: 245 seconds]
rendar has joined #ipfs
<whyrusleeping>
Anyone know how big the ethereum blockchain is right now?
patcon has quit [Ping timeout: 240 seconds]
arpu has quit [Quit: Ex-Chat]
<mildred>
jbenet, anyone: It would be nice if we could decide on the pull request https://github.com/ipfs/specs/pull/59 do we want to discuss more on the format, or do we choose what we decided more or less already: use an escaping method with the @ character.
disgusting_wall has joined #ipfs
mildred1 has joined #ipfs
<libman>
I refuse to touch Ethereum just because it's GPL.
<oed>
libman: cpp-ethereum is MIT
<oed>
whyrusleeping: 5.6G
<libman>
I didn't notice that. Which implementation do most people use?
<oed>
the go one
<oed>
cpp is second
vijayee has joined #ipfs
joshbuddy has quit [Quit: joshbuddy]
<dignifiedquire>
mildred: I'm guessing jbenet is still working through his two weeks of offline backlog
Not_ has quit [Ping timeout: 260 seconds]
ygrek_ has joined #ipfs
<mildred1>
didn't know, thanks. Anyone else interested can also take part
patcon has joined #ipfs
<jbenet>
mildred: yeah dignifiedquire is right but i'm prioritizing ipld because we need that in soon
<jbenet>
happy to look at stuff now
ygrek has quit [Ping timeout: 250 seconds]
<dignifiedquire>
mildred1: trying my best to understand it, but still very new for me
M-fil has quit [Quit: node-irc says goodbye]
M-fil has joined #ipfs
rombou has quit [Ping timeout: 276 seconds]
<vijayee>
jbenet what is the state of the ipld idea
<vijayee>
whyrusleeping with the libp2p initiative is it still feasible to use ipfs as service for network communication as in the website example?
Senji has quit [Read error: Connection reset by peer]
<patagonicus>
I'm thinking of providing a mirror for linux packages (probably for Alpine, maybe Gentoo) on IPFS - is there an efficient way to handle updating the IPFS directory object that is published? As in: don't hash everything every time it's rsync'd from upstream.
<mildred>
vijayee: still in spec state. Pretty close now
<mildred>
go-ipld not usable still
Senji has joined #ipfs
<The_8472>
is this just about serialization/representation? because if that's the case then using arbitrary names as hash keys seems like a bad design
<vijayee>
sweet, is gossip communication considered in the spec? for things other than ipfs' routing?
<The_8472>
just have a two-level hash nesting
<vijayee>
mildred: is the spec public? I'll shutup and read it if that is the case
<mildred1>
we will need gossip to update ipns records
<mildred1>
vijayee: github.com/ipfs/specs/pull/37 (from memory) and don't shut up :)
<vijayee>
thanks mildred
<vijayee>
thanks mildred1
<The_8472>
mildred1, why not { attrs: {...}, namedLinks: {...}} ?
<mildred>
The_8472: because we need to be able to follow paths.
<mildred>
and be compatible with URL that we already have
<The_8472>
that's why i asked whether it's just about representation in serialized form
jfis has joined #ipfs
<mildred>
We could change the way we envision the path algorithm to allow the namedLinks key. That's not a bad idea compared to the escaping mechanism we have
<mildred>
But we have been on the spec for so long, I don't know if it is wise to change that now that we are close to finishing it
<The_8472>
well, out of the two options in the PR the 2nd one seems closer to directory semantics
<The_8472>
hrrm, if i understand it correctly it's the difference between addressing the paths themselves vs. traversing the metadata structures describing those paths
<The_8472>
wouldn't a different namespace make sense then? just go through /ipld/ instead of /ipfs/?
<mildred>
The_8472: that's an idea. But perhaps we will want to use these paths for other prefixes than /ipfs (like /ipns). In any case, I'm not the one pushing for it. An API is fine for me. See https://github.com/ipfs/specs/pull/37#issuecomment-158766176
<The_8472>
well... just nest it /ipld/ipfs/hash/path ?
<mildred1>
that would work
<The_8472>
it's a form of reflection. equivalent to calling stat() on filehandles i guess. it's something outside the scope of the regular filesystem... unless there's a stat-fuse somewhere ^^
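The merkle-path resolution being discussed amounts to splitting the path on `/` and descending key by key through the decoded object. A minimal sketch (illustrative Python; the example document and its `@attrs` key are made up, not the actual IPLD layout):

```python
def resolve(obj, path):
    """Walk a '/'-separated path through a nested JSON-like object,
    roughly how each segment of a merkle-path would be looked up (illustrative)."""
    node = obj
    for segment in path.strip("/").split("/"):
        node = node[segment]
    return node

doc = {"files": {"readme": {"size": 12}}, "@attrs": {"mode": "0644"}}
print(resolve(doc, "files/readme/size"))  # -> 12
```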
patcon has quit [Ping timeout: 272 seconds]
<The_8472>
this also looks a lot like XML nodes
<The_8472>
namespaces, list of children, list of attributes
libman has quit [Remote host closed the connection]
<richardlitt>
brimstone: thanks!
maxlath has joined #ipfs
zorglub27 has quit [Ping timeout: 272 seconds]
maxlath is now known as zorglub27
step21_ is now known as step21
qrosody is now known as procody
akkad has joined #ipfs
hartor1 has joined #ipfs
hartor has quit [Ping timeout: 250 seconds]
hartor1 is now known as hartor
procody is now known as [Prosody]
joshbuddy has joined #ipfs
Not_ has joined #ipfs
devbug has joined #ipfs
<jbenet>
sorry, had to handle other stuff. happy to look at things now
<jbenet>
patagonicus: talk to daviddias about that -- preferably on github.com/ipfs/notes and see the issue about npm and pacman. see also https://github.com/diasdavid/registry-mirror
<jbenet>
patagonicus: we will be devoting some time to making this better and easier (the package manager use case) over next weeks so now is a good time to get involved and do this
<whyrusleeping>
vijayee: re libp2p, yes
<whyrusleeping>
that should be the same
<jbenet>
(cc The_8472 mildred) one design goal for IPLD is allow people with existing json datastructures to dump their data _as is_ into ipfs and get a path-traversable web API out of it. over time, they can selectively add merkle-linking (and other linking) to transform their existing data dumps into better-linked data. but critical to have very low barrier of
<jbenet>
entry and no "You must change your format right away" types of things. we _can_ do some minimum things like escaping key names and so on, but we shouldn't force people to change the structure/shape of the data, or reject data as invalid straight-up
<jbenet>
another goal is to allow people to easily construct data models as they're used to -- data models expressible as simple json.
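One way such key escaping could work — purely a hypothetical sketch, since the actual escaping method was still being decided in the specs PRs — is to prefix user keys that would collide with reserved `@` directives like `@codec`:

```python
def escape_keys(obj):
    """Hypothetical escaping scheme: prefix user keys that start with '@'
    so they cannot collide with reserved directives (dicts only, for brevity)."""
    if not isinstance(obj, dict):
        return obj
    return {("@" + k if k.startswith("@") else k): escape_keys(v)
            for k, v in obj.items()}

# "Qm..." is a placeholder for some hash; "@codec"/"@link" here are the *user's* keys.
user_data = {"@codec": "mine", "name": {"@link": "Qm..."}}
print(escape_keys(user_data))  # user keys survive, shape unchanged
```

The point jbenet makes holds either way: the structure and shape of the user's data are preserved; only colliding key names are rewritten on import.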
<The_8472>
then you're mixing the ipfs structure with arbitrary json data?
<jbenet>
The_8472 the json structure is a representation, we will serialize to cbor
<jbenet>
The_8472 but the data model is compatible
<The_8472>
seems like an impedance mismatch to me
<vijayee>
whyrusleeping: thanks
<jbenet>
The_8472 i dont understand where the mismatch is here-- people already have JSON data, we can import it as is and make it accessible through ipfs.
<jbenet>
The_8472 when we import it, we might do a few changes (like escape things and serialize it to cbor) but people can (in the general case) get out what they put in.
<The_8472>
yeah the "escape things" thing :)
<The_8472>
I generally abhor escaping inside a flexible data structure... that's what the structure is for
Senji has quit [Ping timeout: 260 seconds]
mildred1 has quit [Ping timeout: 264 seconds]
<jbenet>
agreed. but it's a bit different here, because we're creating a flexible structure for others to make structures _with_. think of it more like "designing the language" instead of "using the language to represent something". but yes, it is dirty because we're using both json, and cbor to represent it, etc. there's a lot of design constraints here
cemerick has quit [Ping timeout: 255 seconds]
hashcore has joined #ipfs
<The_8472>
guess i'm seeing the tensions arising from the constraints without having yet seen all the constraints
<jbenet>
for brevity, i did not enumerate all constraints, but they can be found on github either on that PR or on https://github.com/ipfs/go-ipld etc.
<mildred>
escaping would be just for JSON representation
<ipfsbot>
ipfs/master a1e1bcc Richard Littauer: Merge pull request #143 from doesntgolf/patch-1...
<lgierth>
eh no wait two of these are still running 0.3.x
<lgierth>
here this one is good to go: pluto.i.ipfs.io
zorglub27 has quit [Quit: zorglub27]
<achin>
nice!
<achin>
let's see if this'll load! it's a massive tree with about 600000 links
<patagonicus>
jbenet: Ok, will do. I think ipfs files + some light shell scripting will do for now.
<jbenet>
lgierth yay
<lgierth>
my template engine is literally 2 lines of bash, plus 2 more lines for the func{} wrapper, lol
* achin
acquires burrito while waiting for ipfs
<lgierth>
anyhow, the real problem i'm trying to solve is that the big 3 (ansible puppet chef) come with more indirection than necessary -- they make it really hard to see wtf is going on
<jbenet>
patagonicus: you should be able to bootstrap, pluto's on 0.4.0 iirc
<jbenet>
patagonicus: lgierth knows better than me o/
<lgierth>
mh! yeah i'm going to take a closer look tonight -- last time i just said naaah
<jbenet>
mildred: thank you for bearing with me on this difficult pathing decision
<jbenet>
mildred: it is very tricky! :O
<lgierth>
patagonicus: yes it should contain one, pluto.i.ipfs.io with peerid starting in QmSoLP or QmSoLp
<jbenet>
lgierth: i'd likely start with otto-- in my experience hashicorp stuff is generally well scoped.
<mildred>
jbenet: we still have something not specified: how does the IPFS implementation know which canonical format to convert the object into to get the hash?
<patagonicus>
Not sure what went wrong, connect would just hang. Restarted the daemon, then it worked. Maybe all the ipfs files write calls were putting too much load on the daemon so it couldn't handle the request to connect.
<mildred>
Another solution than the @codec key: The ipfs implementation can also store the codec right next to the ipld object, and we can enforce this in IPLD
hellertime has quit [Quit: Leaving.]
<mildred>
in the ipld package, the function to compute the hash of an object could take a mandatory argument, which would be the original codec
<mildred>
this could be stored by ipfs using the wire representation: a multicodec header and the data
<mildred>
if we decide to use the @codec key within the IPLD representation, what if we receive an IPLD object formatted in CBOR but with a @codec=json. Receiving, we check that the hash is correct with the CBOR encoding, but when we decode and encode it back, we encode it in JSON according to @codec and we get a different hash. This is something we'll have to think about in the implementation
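mildred's round-trip concern can be made concrete: the same logical object serializes to different bytes under different encodings, so its hash depends on which encoding you hash. An illustrative sketch, with two JSON variants standing in for the JSON-vs-CBOR case:

```python
import hashlib
import json

obj = {"b": 2, "a": 1}
enc_a = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()  # stand-in for a "canonical" codec
enc_b = json.dumps(obj).encode()  # same logical object, different bytes (stand-in for CBOR)
h_a = hashlib.sha256(enc_a).hexdigest()
h_b = hashlib.sha256(enc_b).hexdigest()
assert h_a != h_b
# Same object, different hash per encoding: a receiver must verify the hash
# over the bytes as received, or know the object's canonical codec.
```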
[Prosody] is now known as {prosody}
ashark has quit [Ping timeout: 264 seconds]
<mildred>
jbenet: you're right, we don't need different kind of merkle-paths if unixfs paths are already different.
hashcore has quit [Quit: Leaving]
<lgierth>
jbenet: yeah i think so too about hashicorp -- there *must* be some tool that doesn't constantly get in my way. a custom dsl rings all my alarms :P
hashcore has joined #ipfs
<jbenet>
mildred: re "which is the canonical format" maybe we could add that as a field in the representation when the representation IS NOT the canonical one? (eg in json, the toplevel object could carry `@codec: /cbor/tags/v1` or something
<jbenet>
patagonicus: yeah sounds like a bug. we need much more robust tests, if you can repro that reliably, would be very useful
<jbenet>
(i've seen it too)
<jbenet>
mildred: oh store the codec out of band? we could try this, yes. will make things more difficult when moving it across wires/apis/etc. (out-of-band data is always complicated)
<jbenet>
"if we receive an IPLD object in CBOR but with @codec=json" we should encode to json before checking the hash, i think.
pfraze has quit [Remote host closed the connection]
<mildred>
"if we receive an IPLD object in CBOR but with @codec=json" and we then decode to encode it back to json, that's a lot of processing. Or perhaps we should tell the other side of the socket that it should send in JSON and not CBOR.
<jbenet>
processing will be way faster than an RTT
<jbenet>
RTTs are usually the biggest cost.
<Shibe>
is ipfs.io the only tracker for ipfs?
<Shibe>
couldn't you take down "the permanent web" by ddosing ipfs.io?
<mildred>
we wouldn't RTT; if the other side is well behaved (it should be, as we implement it) it shouldn't send such things.
<mildred>
So this wouldn't happen, unless the other side is not well behaved. In which case we can convert it. But this would be a rare occurrence.
<Shibe>
ok
<Shibe>
mildred: and suppose ipfs becomes super popular and hundreds of thousands of sites start using it
<lgierth>
Shibe: there are no real trackers, ipfs.io are just the default nodes to connect to
<Shibe>
could ipfs.io handle the load?
<lgierth>
Shibe: you can connect to any node to become part of the swarm
<lgierth>
Shibe: well that depends on the size of the ddos of course -- digitalocean has relatively good defense though
<Shibe>
ok
<mildred>
jbenet: see PR #62 https://github.com/ipfs/specs/pull/62 which defines a single kind of merkle-paths (no more two different paths). With the distinction that unixfs paths are something different
<mildred>
jbenet: what if @codec contains something we don't understand?
<jbenet>
Shibe: we run some bootstrap nodes. there will be many discovery protocols, later on will even persist nodes to disk so finding the network will be easy