<whyrusleeping>
kyledrake: blocks is for content addressed ipfs objects
<whyrusleeping>
datastore is for the little bits of other stuff we need to store
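A rough sketch of the split whyrusleeping describes, with hypothetical interface names (not go-ipfs's actual API): content-addressed blocks live in one store, everything else in a generic key-value datastore.
package main

// Block is a content-addressed IPFS object: its key is the hash of its data.
type Block struct {
	Multihash []byte
	Data      []byte
}

// Blockstore holds the content-addressed objects (the "blocks" half).
type Blockstore interface {
	Put(b Block) error
	Get(multihash []byte) (Block, error)
}

// Datastore holds the little bits of other stuff (config, pins, DHT records...).
type Datastore interface {
	Put(key string, value []byte) error
	Get(key string) ([]byte, error)
}

func main() {}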
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
fleeky has quit [Ping timeout: 272 seconds]
<ipfsbot>
[go-ipfs] lgierth pushed 1 new commit to discovery-cjdns: http://git.io/v4xJp
<ipfsbot>
go-ipfs/discovery-cjdns ad4cdf4 Lars Gierth: WIP...
ashark has quit [Ping timeout: 250 seconds]
<ipfsbot>
[go-ipfs] lgierth created seccat-context (+1 new commit): http://git.io/v4xU0
<ipfsbot>
go-ipfs/seccat-context 0b50080 Lars Gierth: seccat: fix secio context...
<gatesvp>
@jbenet: I'm planning to go over that IPLD spec this weekend. It's a great idea, I actually had a similar thought about 2 months ago when thinking through a replacement for HTTP POST... I like this premise
<ricmoo>
Are there any go folks here? I have a few questions understanding the key stretcher. a) what is the output value of a hash.Sum(nil)... Is it the same as hash.Sum("")? b) What does mac.reset do, as it doesn't seem to be in the documentation. Does it simply reset the mac to its initial conditions?
<whyrusleeping>
ricmoo: that function is basically taking the shared key that both parties securely shared, and using it to generate ephemeral keys with which to encrypt other things
<davidar>
whyrusinning
<whyrusleeping>
i'm not the author of that code, so i dont claim to 100% understand it
<ricmoo>
Right... I'm implementing it in python and have the handshake *almost* done...
<ricmoo>
But I'm just making sure I implement key stretching the exact same way, otherwise it won't work. :)
<ricmoo>
And the documentation for go doesn't specify what taking the hash of a nil value does, nor what a mac reset does...
water1_resistant has quit [Quit: Leaving]
<ricmoo>
For now I'm just assuming it does sane things... It'll be all or nothing, so if my assumption is wrong, it will simply fail, and I'll have to dig deeper into go. :)
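For reference, a minimal Go sketch of the two stdlib behaviors in question, assuming the stretcher uses crypto/hmac: Sum(nil) returns the current digest (appending it to a nil slice gives the same bytes as appending to an empty one), and Reset returns the MAC to its initial keyed state.
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

func main() {
	mac := hmac.New(sha256.New, []byte("shared-secret"))
	mac.Write([]byte("seed"))

	// Sum(nil) appends the digest to a nil slice; Sum([]byte{}) appends it to
	// an empty slice -- both return the same digest bytes.
	a := mac.Sum(nil)

	// Reset discards everything written so far and restores the initial keyed
	// state, so repeating the same writes yields the same digest.
	mac.Reset()
	mac.Write([]byte("seed"))
	b := mac.Sum(nil)

	fmt.Println(hmac.Equal(a, b)) // true
}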
<whyrusleeping>
If i want to grab one source file from a library and use it myself, how do i do copyright licenses and stuff?
<whyrusleeping>
its not quite a fork, because i'm going to use part of their library, but i need to modify other parts in a way that isnt compatible with what they have now
<davidar>
whyrusleeping (IRC): is it a compatible license?
<surajravi>
whyrusleeping: add it as a header (on top of the file)
<whyrusleeping>
erg... its lgplv3
<whyrusleeping>
but we're already using it in ipfs.
* whyrusleeping
science_dog.gif
<davidar>
whyrusleeping (IRC): be careful about where the line on "library" is drawn
<whyrusleeping>
and i need to take their 's3' code and rewrite it
<whyrusleeping>
i guess i could just write it 'from scratch'
<kyledrake>
Sounds like kindof a timewaster. If the idea is that go-ipfs has to be MIT/BSD license then LGPL won't work I think though.
<whyrusleeping>
so we should probably remove the s3 code thats already there...
jhulten_ has joined #ipfs
<kyledrake>
I'd be curious to try a version of the FS blockstore that doesn't split up into directories and see what the perf looks like on S3. THat might be an easier solution than putting S3 right into IPFS.
<whyrusleeping>
why is lgpl bad again?
<davidar>
Yeah, personally I don't have a problem with lgpl, but if you're trying to avoid that, you'd have to be careful about viral behaviour
<davidar>
whyrusleeping (IRC): it's not, jbenet doesn't seem to like it though :/
<kyledrake>
LGPL is fine if you're not mandating MIT/BSD
<kyledrake>
It's a business logic question (jbenet)
<whyrusleeping>
well the testing framework (sharness) we've been using since day 1 is gpl
<whyrusleeping>
soooooo, yolo?
<davidar>
whyrusleeping (IRC): but it's separate to the ipfs library?
<whyrusleeping>
but i guess thats not *part* of go-ipfs?
<davidar>
yeah
<whyrusleeping>
lol, okay, not just that
<davidar>
whyrusleeping (IRC): so, lgpl library is OK so long as you're only linking to it
<whyrusleeping>
go statically compiles
<davidar>
But if you're *integrating* it, you'll have problems
<davidar>
Yeah, static linking is tricky
<whyrusleeping>
huh, i guess go-ipfs is already lgpl
jhulten_ has quit [Ping timeout: 240 seconds]
<whyrusleeping>
our lru lib, our stream muxer, and our utp lib are gpl
<whyrusleeping>
i thought i changed that so he does a notice...
<lgierth>
i reverted something with friend a while ago, but that was juan's code
JasonWoof has quit [Read error: Connection reset by peer]
JasonWoof has joined #ipfs
jabberwocky has joined #ipfs
amstocker has quit [Ping timeout: 272 seconds]
jabberwocky has quit [Read error: Connection reset by peer]
jabberwocky has joined #ipfs
Senji has quit [Ping timeout: 246 seconds]
evanmccarter has joined #ipfs
disgusting_wall has joined #ipfs
disgusting_wall has quit [Client Quit]
Senji has joined #ipfs
kerozene has quit [Ping timeout: 252 seconds]
keroberos has joined #ipfs
keroberos is now known as kerozene
go1111111 has quit [Ping timeout: 260 seconds]
jhulten_ has joined #ipfs
mek_ has quit [Ping timeout: 246 seconds]
jhulten_ has quit [Ping timeout: 265 seconds]
go1111111 has joined #ipfs
livegnik_ has quit [Quit: Aaaaand it's gone!]
livegnik has joined #ipfs
Rubyist has joined #ipfs
<Rubyist>
Hello, I am trying to use IPFS on x86-64 Windows, and every time I try to do any action, from running cat on the readme file, to doing a simple ipfs ls /ipfs, I am getting an error like this:
<Rubyist>
Error: can't Lock file "C:\\Users\\Rubyist\\.ipfs/repo.lock": has non-zero size
devbug has joined #ipfs
<Rubyist>
And... after a few tries at nuking my repository and adding it again, I managed to get a working ipfs...
<Rubyist>
But if I want to access files
<Rubyist>
I can't use /ipfs/hash-name/file
<Rubyist>
I have to use hash-name/file
hoony has quit [Quit: hoony]
ygrek has quit [Ping timeout: 260 seconds]
<tperson>
Rubyist: Are you talking about the cli?
<Rubyist>
yeah...
Tv` has quit [Quit: Connection closed for inactivity]
<tperson>
What does `ipfs version` report?
<Rubyist>
ipfs version 0.3.10-dev
<tperson>
Do you get any errors when trying to use /ipfs/<hash>?
wowaname has quit [K-Lined]
<Rubyist>
yeah, invalid references
simonv3 has quit [Quit: Connection closed for inactivity]
hellertime has quit [Quit: Leaving.]
devbug has quit [Ping timeout: 240 seconds]
<achin>
what hash are you trying?
surajravi has quit [Quit: Leaving...]
interfect has joined #ipfs
<Rubyist>
the hash that I got when I init'd the system. In fact, no hash works with a /ipfs/ prefix
<Rubyist>
but every hash works without it
<Rubyist>
and I'm not sure why
jhulten_ has joined #ipfs
<achin>
what's the error?
jhulten_ has quit [Ping timeout: 250 seconds]
nicolagreco has joined #ipfs
ygrek has joined #ipfs
mek_ has joined #ipfs
<nicolagreco>
I finally read the SFS paper
<jbenet>
nicolagreco: yay! which one? there's a few. the big one is the thesis
<ricmoo>
Hooray!!! I have finished the handshake process in Python...
<ricmoo>
If whyrusleeping is still out there, thanks! I had to resort to simple go snippets to compare inner loops and what not...
<AtnNn>
Has there been any work lately on a key store?
<nicolagreco>
@jbenet I read SUNDR (not on SFS - but relevant), Escaping the evils and his master's thesis
<nicolagreco>
I can't understand how I missed such a professor
<ricmoo>
Would anyone, by chance, know where the next entry point in the go client is for the wire protocol? I can now see random/interesting messages coming across the wire, but don't know exactly what all the possible requests/responses are.
<nicolagreco>
(he studied under my security professor (Kaashoek))
<nicolagreco>
it made me understand a lot of parts of IPFS (and design choices)
<jbenet>
nicolagreco: dm is one of the best distributed systems + security people in the world. top 10. (i'd say top 5).
<nicolagreco>
I am starting to think that SUNDR could be the next step for collaborative editing
<jbenet>
nicolagreco: yeah-- i accidentally reinvented a lot before i found SFS-- i found SFS as i looked for a naming solution for ipfs.
<jbenet>
nicolagreco: like, i didnt find sfsro until i told dm about ipfs and he was like "oh yeah like sfsro?"
<nicolagreco>
ha!
<nicolagreco>
that is what happened when I first met timbl
<nicolagreco>
He had a name for anything I thought I invented
<jbenet>
nicolagreco: i haven't read SUNDR yet-- will take a look.
<jbenet>
nicolagreco: it's great meeting the masters of the jedi order, eh?
<nicolagreco>
:)
<jbenet>
also, i've been calling self-certified links "mazieres links"
<jbenet>
(to go with "merkle links" for cryptographic-hash links)
<nicolagreco>
I thought the m stood for merkle (somehow)
<jbenet>
yeah, mlink is a merkle link-- an unfortunate clash.
<nicolagreco>
@jbenet something I am putting a lot of thought into is the webdht vision of manu sporny
<jbenet>
btw, ipfs makes ld great because you dont have to go and fetch stuff from http. you can have it local.
<jbenet>
nicolagreco: yeah! manu mentioned here a while back-- i need to put effort into helping on that front. sorry have been busy.
guest234234 has quit [Ping timeout: 252 seconds]
<jbenet>
ohhhhhhhh manu, you should be aware of this effort: http://www.weboftrust.info/ -- good stuff coming out of it
<nicolagreco>
I feel I will very likely work on that (webdht) or similar solutions
<nicolagreco>
(actually I am happy to help you to help manu)
<nicolagreco>
jbenet I know, I have seen that, I have been following and reading all the drafts/content
<jbenet>
whyrusleeping lgierth: i think the osx stall problem may be in master. i recall a recent first pr that had this-- was it maybe the watermark gc? not sure
<nicolagreco>
I got linked that, but I haven't gone through it yet
<jbenet>
nicolagreco: i think that format completely misses the point.
<jbenet>
nicolagreco: it makes no sense to address something under an http URL.
<jbenet>
nicolagreco it's a useful hack to add trust to _URLs_, but people should not be _relying on_ locations when you have hashes.
<nicolagreco>
relying on location, you mean relying on a url?
<nicolagreco>
in the sense of unique location?
<jbenet>
yep
<interfect>
OK, so, FUSE.
<interfect>
Why doesn't it?
<interfect>
I'm on Ubuntu 15.10, I can read and write /etc/fuse.conf and /ipfs
<interfect>
But when I ipfs mount I get "ERROR core/comma: error mounting: fusermount: exit status 1 fusermount: exit status 1 mount_unix.go:219" from the daemon
<nicolagreco>
I agree, trusty uris are the sort of idea the original sfs paper had - /sfs/host:hostid (with the hash of the content instead of the hash of the public key)
kerozene has quit [Ping timeout: 265 seconds]
<jbenet>
interfect: can you write to /ipns too?
<interfect>
I should have been able to.
<interfect>
Now I definitely can, I chowned it me:fuse instead of just root:fuse
<interfect>
And now it seems to work.
<Stskeeps>
random question, can you write to /ipfs/ or is it RO? or is storage only through the http interface
<interfect>
Why do I need write access to fuse.conf? It didn't write anything there.
<jbenet>
Stskeeps: today you cannot write to /ipfs, but we may have a path like `/ipfs/mfs/...` that writes to mfs (in dev0.4.0+)
<interfect>
It could be the groups not updating. I tried making new login shells and fuse showed up in groups for them, but maybe I have a personal fuse that isn't running under the new shells or something?
<interfect>
I'll try again after I reboot and let yall know if something is still broken
Rubyist has quit [Read error: Connection reset by peer]
<ilyaigpetrov>
Are IPNS/IPFS liable to censorship by public key/hash the same way as IP/DNS by ip/name?
<nicolagreco>
jbenet (curiosity time!) I am curious how ipfs will implement different namespaces, e.g. /ipfs /ipns /torrent /bitcoin
<nicolagreco>
(in other words) who decides on the naming
<jbenet>
Ah we won't. That should be the protocol scheme identifier space + system /local
<jbenet>
Like /http /ftp etc
<nicolagreco>
dm mentions the idea of having for example a /verisign that acts as a CA and hence can verify /verisign/mit/ and similar
<jbenet>
Yeah we can do the same thing with ipns
fingertoe has joined #ipfs
<nicolagreco>
(yes I know about ipns)
<jbenet>
/ipns/<versignskey>/mit
<nicolagreco>
jbenet I think either here or somewhere I think read about the example of /torrent and /bitcoin
<jbenet>
dm has the symlink in the fs
<jbenet>
We just have dnslinks and other type of human readable links
<nicolagreco>
when you say "we won't" you mean that you will never add these root paths
<nicolagreco>
or you won't decide on the naming?
<jbenet>
You can always do local symlinks but then they're private
<jbenet>
We won't decide on it
<jbenet>
There's a page -- not sure if iana or w3c that lists protocols
<jbenet>
Web* protocols
<jbenet>
We recommend those
<nicolagreco>
and I guess you will rely/follow that list
<jbenet>
We have to clean up the root fs of kernels first though
<jbenet>
mkdir /local; mv /* /local/*
<nicolagreco>
ehehe
<nicolagreco>
actually, I think /home would be enough, no?
<nicolagreco>
(yes I agree that you lose all the other pieces)
<mek_>
Has it been considered to create a github org for either ipfs-projects, ipfs-clients, or ipfs-implementations (which forks relevant community projects and moves towards cleaning up ipfs)?
<jbenet>
mek_ meaning move repos out of the ipfs org itself?
<mek_>
Potentially, to consolidate the core libraries and documentation.
<jbenet>
mek_ we have a ton of repos and we'll keep having more. I think we may add multiprotos and libp2p orgs
<jbenet>
mek_ the concern I have with that is that it creates a reluctance to make small repos. In js it is very common to make one repo per small module
<jbenet>
don't want people thinking they would pollute the namespace and thus not modularizing.
<jbenet>
Github has limitations re repos and pages etc, but search so far has worked. Maybe we can fix this with linking to the core repos from ipfs/ipfs readme
<nicolagreco>
I predict that github sometime soon will restyle organization pages
<mek_>
Yeah, on an organizational front, there are a lot of "notes" and "pm" and "specs" and "awesome" and "ipfs" and "community" and "apps" pages
<nicolagreco>
they are pretty unusable (if it wasn't for search)
expeditiousRubyi has quit [Ping timeout: 246 seconds]
expeditiousRubyi has joined #ipfs
<mek_>
nicolagreco: True. Especially to developers. To someone who is approaching the project for the first time, they are likely looking for a client library or an implementation. It's a bit hard to navigate.
<mek_>
Committing to a single master index (some document which at minimum references the other documents, e.g. community, apps, implementations) could be helpful.
<mek_>
Maybe it even exists, but the fact that it's not immediately obvious to a noob like me might mean something. (just for what it's worth)
<jbenet>
mek_ absolutely.
<jbenet>
mek_ suppose there is a project directory-- can you find it?
<jbenet>
(narrowing search space: it's on github)
<mek_>
Apps perhaps? Examples maybe? I see a few projects under ipfs like file-browser... (on page 3 now) Dataviz...
<mek_>
Ah, actually, I do see project directory on the #ipfs page now.
<ipfsbot>
[ipfs] jbenet created move-project-dir-note (+1 new commit): http://git.io/v4xNT
<ipfsbot>
ipfs/move-project-dir-note af60e22 Juan Benet: moved project directory note...
<ipfsbot>
[ipfs] jbenet opened pull request #125: moved project directory note (master...move-project-dir-note) http://git.io/v4xN3
Senji has quit [Read error: Connection reset by peer]
Senji has joined #ipfs
elimisteve has left #ipfs [#ipfs]
<ricmoo>
Are there any peeps out there that are familiar with the wire protocol? I have handshake working, but can't figure out what the first 11 bytes of a message in the header are, and then how to interpret the payloads...
jhulten_ has joined #ipfs
<jbenet>
ricmoo: wait-- what language are you doing this in?
<ricmoo>
Python
<jbenet>
ricmoo the handshake is unfortunately complicated. take a look at how dev0.4.0 or js does it.
<ricmoo>
I have completed the handshake.
<jbenet>
ricmoo it uses multistream and multicodec
<jbenet>
ricmoo with go-ipfs@0.3.8 or 0.4.0?
<jbenet>
because it changed.
jhulten_ has quit [Ping timeout: 240 seconds]
<ricmoo>
I am trying to understand the decrypted bytes... For example, I get a "\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01" followed by the next message of "\x0f"... I have established that the last byte(s?) of the first message indicate the length of the next...
<ricmoo>
Oh? One sec... I'll check my version.
<ricmoo>
(I agree, handshake was ridiculously hard to implement. :))
<jbenet>
there's a double go-msgio in 0.3.8-- removed in 0.4.0. then there's multistream.
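As a rough illustration of that framing, here is a minimal sketch of reading one go-msgio frame, assuming (as go-msgio does) a 4-byte big-endian length prefix followed by that many payload bytes; the 0.3.x wire described above wraps this framing twice.
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// readFrame reads one length-prefixed message: a 4-byte big-endian length,
// then that many payload bytes.
func readFrame(r io.Reader) ([]byte, error) {
	var length uint32
	if err := binary.Read(r, binary.BigEndian, &length); err != nil {
		return nil, err
	}
	buf := make([]byte, length)
	if _, err := io.ReadFull(r, buf); err != nil {
		return nil, err
	}
	return buf, nil
}

func main() {
	// "\x00\x00\x00\x02" announces a 2-byte payload.
	wire := bytes.NewReader([]byte{0, 0, 0, 2, 0x0a, 0x0b})
	frame, err := readFrame(wire)
	if err != nil {
		panic(err)
	}
	fmt.Printf("% x\n", frame) // 0a 0b
}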
<jbenet>
it's unfortunately hard because we're trying to be super friendly to people upgrading the protocol, using other cipher suites and other transports
<ricmoo>
ipfs version 0.3.10-dev
<ricmoo>
Does that mean I need to re-implement everything again? :(
<jbenet>
not everything, but quite a bit. this is why it is very important for the py effort to sync with the rest....
<jbenet>
did you take a look at the specs? daviddias put in a lot of work recently to make those reflect the current (0.4.0+) state
<jbenet>
ricmoo yeah-- that handshake is a TLS alternative, but since it doesn't have the weight of a long audit from the crypto community we're not comfortable recommending it. we may keep it as an option.
<jbenet>
(multistream allows this)
<ricmoo>
Those aren't the messages I'm seeing though...
<ricmoo>
I am completely on-board with using TLS. :)
<ricmoo>
Getting ECDH and all hashed goodness in the right order was not pleasant. :) Is there also going to be a move away from RSA? Because that was also hairy (compared to EC)
<ricmoo>
My Python implementation was just to orient myself with the wire protocol, so I could start writing it in Objective-C... But maybe you have a better suggestion? Basically all I'm really looking to accomplish is 2 tasks (for now); a) Find a bunch of nodes (via bootstrapping) and b) get blocks by their ipfsPath and ipns...
<ricmoo>
(also, I want to remain using randomly selected nodes, not run my own gateway or rely on a specific gateway... Truly decentralized)
<jbenet>
ricmoo: sorry the handshake is notoriously difficult, because we're being -- as mentioned -- very friendly to the future and to other protocols.
<jbenet>
ricmoo: once you wrap your head around multistream and how it works-- its easy actually
<ricmoo>
Yup, that's no problem. IPFS I think will change the world, so getting it as right as possible is key. :)
<ricmoo>
So, are the bytes I'm getting back currently multistream? Or do I need 0.4.0 for that?
<ricmoo>
Because it doesn't look like the messages in the documentation...
M-davidar is now known as davidar_
<ricmoo>
I mean, they are in there, interlaced with some header-ish things?
<davidar_>
mek_: ok, I'll read through it soon
<mek_>
davidar_: No rush, the average lifespan of a webpage is 44 days
<daviddias>
then you will see a tls handshake
<daviddias>
when we dial to a peer, we try to multistream handshake into spdy, and once that is agreed, then you see the spdy handshake, and so on
<daviddias>
ricmoo: identify is one of the protocols we handshake on a dialed connection, it enables us to tell the other peer the multiaddr we see them at, and with that tell that peer their public address if they are behind a NAT
<daviddias>
it is kind of a STUN service
<daviddias>
at the peer level
<davidar_>
mek_: I'm just about to follow up with citeseerx on my request of mirroring their collection to ipfs
<ricmoo>
So, given a message, which is a protobuf, how do you know which protobuf (eg. dht, identify, etc) to parse it with?
<mek_>
davidar_: I'm reading BASE-3 -- there's a lot here.
<daviddias>
ricmoo: yes :) , instead of doing Protocol Muxing on the port level, and therefore having to agree on ports beforehand
<davidar_>
mek_: once we have ipfs-cluster going, storage should be less of a concern, as we'll mostly be using it for seeding
<daviddias>
we open a connection to a peer
<daviddias>
upgrade that connection to something like SPDY
<daviddias>
so that we can multiplex several streams in one connection
<davidar_>
but that assumes people volunteer enough disk space for these things :)
<daviddias>
and in each of those streams, we handshake a different protocol
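A small sketch of how one of those per-stream handshakes is framed, assuming multistream-select's format of an unsigned-varint length prefix followed by the protocol path and a newline; the exact protocol ids here are illustrative.
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeMsg frames one multistream-select message: uvarint length prefix,
// then the protocol path terminated by a newline.
func encodeMsg(proto string) []byte {
	payload := []byte(proto + "\n")
	prefix := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(prefix, uint64(len(payload)))
	return append(prefix[:n], payload...)
}

func main() {
	// Both sides first agree on multistream itself, then the dialer proposes
	// the protocol to run on that stream (secio, spdy, identify, dht, ...).
	fmt.Printf("% x\n", encodeMsg("/multistream/1.0.0"))
	fmt.Printf("% x\n", encodeMsg("/secio/1.0.0"))
}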
<mek_>
davidar_: I figured along those lines, but to get us going (especially if we want to be ambitious about amt of coverage of a specific domain...
<ricmoo>
I think I understand... I need to read this over again now. :)
<davidar_>
mek_: yeah, definitely
<davidar_>
jbenet: ipfs-cluster is still a while off anyway, I'm guessing?
<mek_>
davidar_: I think jbenet would probably agree that if we can get the Archive to do an experiment revolving around wayback, Brewster et al will be sold
NightRa has joined #ipfs
<mek_>
And hopefully could contribute a lot of storage, by virtue of them needing to solve this problem. That's the goal, at least.
<mek_>
Still trying to learn the best people to direct questions to so I'm not always nagging jbenet. Do the concepts of "identity", "post", and "sharing" all fall under the jurisdiction of *ipld*?
<haadcode>
morning
<mek_>
Morning :o)
<jbenet>
sorry ill brb need to relocate
<mek_>
davidar_: nice email, thank you :o)
<davidar_>
mek_: of course I only notice the typo once I've sent it :p
<daviddias>
ricmoo: cool
<davidar_>
mek_: yeah, wayback on ipfs would be cool
<haadcode>
mek_: reading through your notes, looks like you're building something similar to what I'm building, so maybe this will give you some ideas https://github.com/haadcode/anonymous-networks. if you want to bounce ideas or discuss, let me know. I've gone through the same questions you have in your notes, so I might be able to provide some input. btw. what exactly are you building? :)
<davidar_>
i.e. if you only need to access a small section of the database, that's all you need to retrieve
<mek_>
Gotcha, granular access, relating to the query DSL, right?
<davidar_>
mek_: yeah, and the query model is basically JSON addressing
<davidar_>
so it's similar to a basic nosql system i guess
<mek_>
Makes sense.
<davidar_>
which you can then use to build more sophisticated stuff on top of
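A toy sketch of that "JSON addressing" idea: resolving a slash-delimited path through a nested JSON-like object. The function name and shape here are illustrative only, not an actual IPLD API.
package main

import (
	"fmt"
	"strings"
)

// resolve walks a slash-delimited path through nested maps, e.g. "author/name".
func resolve(obj interface{}, path string) interface{} {
	for _, seg := range strings.Split(strings.Trim(path, "/"), "/") {
		m, ok := obj.(map[string]interface{})
		if !ok {
			return nil
		}
		obj = m[seg]
	}
	return obj
}

func main() {
	doc := map[string]interface{}{
		"title":  "hello ipld",
		"author": map[string]interface{}{"name": "nicolagreco"},
	}
	fmt.Println(resolve(doc, "author/name")) // nicolagreco
}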
<davidar_>
mek_: personally I'm interested in putting a full-text index for all of our archives onto it, but that's probably a little while off yet
<mek_>
I have a background in search + access to tons of book data, so that could be fun to team up on.
<mek_>
Also, Greg Lindahl at the Internet Archive will be a great resource + contact.
<mek_>
In terms of building more sophisticated stuff over idpl... My goal this week is to contribute towards a spec of "POST" (core) data structure, as well as "POST" (ext) extensions, e.g.: todo-items/tickets, chat-msgs, calendar/schedule/invitations, and blogs/articles/edu papers
<mek_>
s/idpl/ipdl
<davidar_>
mek_: awesome
<davidar_>
s/ipdl/ipld? :p
<mek_>
Yeah, a perl style regex replacement!
<mek_>
s (replace) ipdl (with) ipld
<davidar_>
mek_: what do you mean by "POST"?
<davidar_>
mek_: i mean, you replaced it with another typo :p
<mek_>
Oh
* mek_
facepalms
<davidar_>
multivac actually used to execute find-replace statements, but people complained so I turned it off :(
<mek_>
davidar_: Ah, this is jbenet's idea of having a data structure to represent a common "post" of content. Which could be a blog post. Or a message from one person to another, etc.
<davidar_>
oh, gotcha
<davidar_>
mek_: is this related to the archives metadata format stuff?
<mek_>
davidar_: "POST", you mean?
<mek_>
Or content for full text search
<mek_>
"POST", I imagine, would be a special "type" of object which encodes information like "from", "to" (almost like asking the question... how could an email be sent over IPFS)
<mek_>
Just a convention which all services can adhere to, so they can be interoperable with each others' data.
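A hypothetical sketch of such a "POST" object, based only on the from/to/tags description in this discussion; none of these field names come from an actual spec.
package main

import "fmt"

// Link is a merkle link to another IPFS object, by hash.
type Link struct {
	Hash string
}

// Post is a guessed shape for a core "communication" record.
type Post struct {
	From    string   // identity of the author
	To      []string // recipients, if any
	Tags    []string // topic tags, e.g. Wikidata entities
	Content Link     // merkle link to the body (blog post, chat message, ...)
	Parent  *Link    // optional link to the post being replied to
}

func main() {
	p := Post{
		From:    "QmAuthor...",
		Tags:    []string{"ipfs"},
		Content: Link{Hash: "QmBody..."},
	}
	fmt.Printf("%+v\n", p)
}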
<davidar_>
aha
<davidar_>
yeah, sounds relevant to the stuff haadcode has been working on
<mek_>
Cool! Excellent.
<mek_>
haadcode: Would love to help out and hear your ideas
<davidar_>
let's bring back djc's Internet Mail 2000 ;)
<davidar_>
*djb
<mek_>
Ha ha ha
<rendar>
davidar: IPLD is basically "dump your database into IPFS" -- so with IPLD basically you can have an entire relational db ported into ipfs objects?
<mek_>
davidar_: We were talking about "POST" having #tags, and tags coming from RDF entities on Wikidata (which is where Freebase went)
<davidar_>
rendar: not relational, JSON
<rendar>
davidar: oh, ok
<davidar_>
rendar: but you could build a relational db on top of that, i believe some people are interested in SQL on ipfs (jbenet?)
<mek_>
The pitch is... Imagine a chat client built over IPFS and the "POST" (core) data structure. You can imagine there being a UNIVERSAL conversation going on, instead of being in a specific IRC channel.
<mek_>
Everyone receives the same massive blob of text.
<davidar_>
mek_: yeah, definitely talk to haadcode
<mek_>
But then you can filter your context (stream of text) based on tags
<davidar_>
I'm also trying to get matrix.org onto ipfs
<haadcode>
sounds similar
<davidar_>
ping Matthew, Erik
<davidar_>
they probably aren't awake yet
<mek_>
Wait, what. Why haven't I heard about matrix.org
<davidar_>
mek_: also chat.ipfs.io ;)
<davidar_>
mek_: have you been living under a rock? :p
<davidar_>
it's still quite new I guess, but then again so is ipfs
<mek_>
I think I have been.
<davidar_>
mek_: unfortunately it got left out of that "opensource alternatives to slack" post on HN recently :(
<jbenet>
back -- davidar_ ipfs-cluster-- no one has started implementing it yet. it's not super hard, just not fully speced/started
<davidar_>
i think the author was going to fix that though with a follow-up post
<jbenet>
davidar_ i think we've discussed POST before, not sure, maybe not. it's a minimal data structure for "communications" with merkle linking
<davidar_>
jbenet: mek_ was asking if storage is a concern for us ;)
<jbenet>
haadcode: did you make a chat client?
<davidar_>
jbenet_: is it something the matrix folks would be interested in helping with?
<davidar_>
*jbenet
<haadcode>
jbenet: yes, see the link a couple of lines above
<davidar_>
got used to the trailing underscores :p
<davidar_>
jbenet: don't tell me you haven't seen it yet :p
<jbenet>
davidar_ mek_ it's not a concern per-se, because we can scale up, but hey-- more storage always useful
<jbenet>
haadcode ok are you willing to use POST? it will make your life way easier. mek_ and i will write up stuff on it this week-- welcome to join the effort--
<ricmoo>
A ha! Okay... That totally explains, I think, to a large degree the stuff I be seeing. :)
<mek_>
haadcode: Is this running somewhere?
<mek_>
If now, would you like it to be?
<ricmoo>
Thanks! :)
<mek_>
s/now/not
<haadcode>
jbenet: nope, not on the list. yes, would love to join the effort of POST. need to understand first what you're planning, IPLD is the spec for it?
<haadcode>
mek_: it's running yeah. just clone the repo and hit connect :)
<davidar_>
haadcode: submit a PR to add it to the list
<haadcode>
davidar_: k, will do so today.
<jbenet>
haadcode: no, POST builds on IPLD.
<jbenet>
daviddias: is npm-over-ipfs working well enough to use?
<jbenet>
haadcode: POST is like unixfs (a datastructure on ipfs) but for defining "communications" {posts, articles, papers, blogs, tweets, ...}
<haadcode>
jbenet: ok. you have any notes or discussion as to what you plan to do? or is this the beginning of it?
<jbenet>
haadcode: it's roughly formed, but not yet put to paper. mek_ took a bunch of awesome notes today
<haadcode>
been thinking about something more general than LL
<haadcode>
jbenet: ah, ok! so the notes I was looking at were the notes :) cool, got it.
<haadcode>
mek_: is that using ipfs?
<daviddias>
It is cloning npm well and spins up a registry that can serve the npm cli. Not cloning, and using IPFS directly, has a very slow start because fetching the DAG node with the npm registry listing is crazy slow or sometimes weird
<mek_>
haadcode: Nope.
<mek_>
But I was hoping to build something using ipfs
<mek_>
So, let's work together.
<mek_>
Language doesn't matter to me.
<haadcode>
yeah!
<mek_>
Let me get your code running.
<jbenet>
haadcode: "(async(() => {" wat do?
<jbenet>
is it my node version?
<daviddias>
Fat arrows are the new function keyword
<jbenet>
victory for the (jashkenas) horde!
<mappum>
they are slightly different though, they don't have their own 'this' context
<jbenet>
oh
<jbenet>
bummer
<daviddias>
Use node.js 4
<jbenet>
i always thought coffeescript got functions + scoping really well.
<jbenet>
daviddias how do i use victor's n fix?
<daviddias>
Fix?
<mappum>
well that's useful in a lot of cases, you dont have to do '(function () {}).bind(x)', or 'var self = this; function () { ... self.whatever }'
<mek_>
We should (theoretically, assuming new people could interop w/ irc) use something like this instead of irc
<davidar_>
mek_: matrix.org has an irc bridge... ;)
* davidar_
wonders if haadcode's thing could federate somehow
<haadcode>
federate how?
<davidar_>
haadcode: federate with the Matrix HS network
<davidar_>
once Erik wakes up, I'll ask him for more details :)
<mek_>
jbenet, haadcode, any others interested -- can we set a time to sit down over google hangout (or name your favorite service) to discuss "POST" + helping build out this chat app?
<davidar_>
at the very least it could probably be bridged though
<grahamperrin>
If IPFS could be an alternative (optional) method of distribution, then am I correct in guessing that problems such as "mirrors out of sync" could become a thing of the past?
<davidar_>
grahamperrin: yeah, ipfs makes syncing mirrors really easy
gperrin has joined #ipfs
<davidar_>
and even if a mirror is out of sync, it can just pull files from anywhere in the network
<davidar_>
so, as long as you have the most recent root hash, you'll always get an up-to-date mirror
<grahamperrin>
davidar: thanks, I thought so. Now I'm seeking a page that might sum it up neatly to a reader whose mindset might be 'stuck' in CDN mode
<davidar_>
pierron: ah, hadn't come across that yet
<pierron_>
davidar_: grahamperrin: This was our conclusion too, distributing software in the same way as we are doing with a static server should work with just ipns & ipfs.
<pierron_>
davidar_: grahamperrin: But one of the concerns I had was that decentralized sources of packages mean that, in the short window while you need to update your computer, somebody can attack you by observing that you are pulling data, in which case running behind Tor might be needed for the clients.
<davidar_>
pierron: I was thinking about that. Can't someone already do that with traditional CDNs, by snooping on someone's traffic?
<davidar_>
although i guess that's a more difficult proposition
<pierron_>
davidar_: yes, but CDNs are usually held by trusted parties, I would hope.
<pierron_>
davidar_: I would also think that CDNs are not communicating in the clear.
<davidar_>
a lot of mirrors are just plain http
<davidar_>
apt-get can't even handle https by default iirc
* davidar_
bbiab
jhulten_ has joined #ipfs
martinBrown has quit [Quit: -]
<jbenet>
pierron_ where was IPFS mentioned at nixcon? what talk?
<jbenet>
(have a link to the mention?)
dignifiedquire has joined #ipfs
martinBrown has joined #ipfs
<emery>
jbenet, just in passing by Eelco, and then between talks
<jbenet>
grahamperrin: we would love to help make this happen.
<pierron_>
jbenet: in the introduction talk from Eelco Dolstra.
<jbenet>
emery: ah -- was it a good mention or bad? would love to solve whatever problems were seen?
<emery>
good mention, I think its on the todo list, but its a long todo list
<jbenet>
(i saw some email a while back from someone who didnt understand the ipfs model saying it wouldnt work or something)
<pierron_>
jbenet: I am no longer sure he mentioned it by name, but he explicitly mentioned a decentralized way to share build outputs.
<jbenet>
emery: right makes sense
<jbenet>
btw, we don't _need_ ipns names yet-- we can do everything with just ipfs + dns (people in these systems rely on DNS anyway already)
<jbenet>
ipfs + dns actually works really well, with low TTLs.
jhulten_ has quit [Ping timeout: 265 seconds]
<emery>
I remember Eelco mentioning it, and he said later that there is someone else in the project that was interested in publishing binaries with it, but I don't remember who
<jbenet>
once we have better signed records (iprs) can even do a dnslink to a signed thing so dont even have to trust DNS correctness.
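For illustration, the ipfs + dns pattern mentioned above is roughly a dnslink TXT record with a short TTL pointing a name at a hash (placeholder hash here):
    example.org.  120  IN  TXT  "dnslink=/ipfs/<hash>"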
<dignifiedquire>
davidar: are you still giving a talk?
<dignifiedquire>
sorry I meant daviddias
<emery>
jbenet, its at ~25:00
<pierron_>
emery: the someone else might have been me ;)
<emery>
ah, ok
<jbenet>
woo \o/
<pierron_>
emery: if you are referring to Tuesday
<emery>
i asked eelco about it after the talk and he said someone was interested
<pierron_>
emery: jbenet: A master's student was looking for a subject, and IPFS was mentioned as a way to solve the distributed binary cache.
<emery>
that would be nice
<pierron_>
This was 2 weeks ago.
* daviddias
<daviddias>
dignifiedquire: finished :)
<dignifiedquire>
daviddias: nice :)
<jbenet>
pierron_ we would love to help with this stuff. and please bear with any perf problems-- we have a _ton_ of room to optimize. we've been focusing on features more than optimizing perf.
<jbenet>
(like its ok, but it can be way better)
<dignifiedquire>
got something cool to show hopefully tonight, in the meantime could you do me a favor and check https://github.com/ipfs/js-ipfs-api/pull/130 cause I currently have to depend on my fork in my projects
<emery>
I'd like to help but I'm trying to leave the land of unix, so I don't have that much time anymore
<jbenet>
pierron_ random question -- what tooling do nix and nixos use for testing cross arch/platform compatibility? equivalent of http://build.golang.org/
<pierron_>
jbenet: we have a build farm which compiles natively on each architecture.
<reit>
where you have an ipfs.js frontend that you pass your site's hash into and the js spins up a node and loads it, but you can also take that hash and use your local daemon with it
<victorbjelkholm>
daviddias, wooo! Can't wait for the Pastel de Nata!
r04r is now known as zz_r04r
jhulten_ has quit [Ping timeout: 260 seconds]
dignifiedquire has joined #ipfs
Encrypt has quit [Quit: Eating time!]
<haadcode>
back
<haadcode>
daviddias: the server went nuts, thus the hang-up. fixed it and now it's back online. if I had to guess, there's a bug there somewhere (on the server side) ;)
simpbrain has joined #ipfs
voxelot has quit [Ping timeout: 265 seconds]
<haadcode>
daviddias davidar_ mek jbenet thanks for testing! found a breaking bug due to having such a huge load on the network at once (huge > 4 people at once ;))
<dignifiedquire>
daviddias: pretty sure I know what the problem is in #131, you are comparing a string with a buffer, which fails for obvious reasons, trying to figure out a way to compare the two without having nodejs run out of memory and crashing (which I already had happen twice now :D)
<dignifiedquire>
daviddias: okay we need to use a smaller file..
voxelot has joined #ipfs
voxelot has joined #ipfs
<ipfsbot>
[js-ipfs-api] Dignifiedquire pushed 2 new commits to test/bigfile: http://git.io/v4hfk
<ipfsbot>
js-ipfs-api/test/bigfile 093de9b dignifiedquire: test: Use stream comparison instead of strings
<ipfsbot>
js-ipfs-api/test/bigfile 48b0207 dignifiedquire: fix: Do not parse non json responses
<daviddias>
awesome, was on that as well
<dignifiedquire>
I think if you make the file sth like 10mb the tests will pass in a reasonable amount of time
* dignifiedquire
can’t type..
<haadcode>
daviddias dignifiedquire hey re. the big file issue, I reckon you should use a file > 256MB because that's the max string size in node.js, so if anywhere we try to convert the full buffer to a string, it'll fail and testing with a larger than 100mb file would catch that. just a thought, came across such a bug this week when js-ipfs-api was still using request.
eaterof is now known as eater
<dignifiedquire>
haadcode: yes, but then we can’t actually check for correctness of the contents because that takes ages catting a file of that size, so we can just start catting it and stop if we see it’s a stream that’s continually running and not throwing an error
zz_r04r is now known as r04r
<haadcode>
dignifiedquire: catting a > 1gb file into the stream buffer took maybe 10s on my machine (running a local ipfs daemon)
<dignifiedquire>
haadcode: yes but cating the file through ipfs + reading the file from the file system at the same time already takes > 60s for me on my machine for 100MB file
<haadcode>
cat to fill the buffer and read from fs to compare the data?
<dignifiedquire>
streaming both into a buffer and comparing the data
<ipfsbot>
[js-ipfs-api] diasdavid deleted test/bigfile at 4cb5a2c: http://git.io/v4hEp
r04r is now known as zz_r04r
rombou has joined #ipfs
jabberwocky has quit [Remote host closed the connection]
jabberwocky has joined #ipfs
<haadcode>
daviddias: hmmm ok. you're the first one to have connection problems to the server that I know of so I'm unsure what's going on. do you have aggressive firewall? is there anything in the terminal (nodejs) log?
<daviddias>
haadcode: I'm on a "public space wifi"
<daviddias>
so, maybe?
border0464 has joined #ipfs
<daviddias>
node.js terminal doesn't print any error
<haadcode>
last line is "connecting to ..."?
e-lima has quit [Ping timeout: 272 seconds]
rombou has quit [Ping timeout: 240 seconds]
zz_r04r is now known as r04r
zmanian_ has quit [Ping timeout: 272 seconds]
jhulten_ has joined #ipfs
e-lima has joined #ipfs
<nicolagreco>
I am now playing with implementing sundr
<dignifiedquire>
daviddias: have you seen https://esdoc.org ? it could be an even better solution for getting good docs on js-api
jhulten_ has quit [Ping timeout: 272 seconds]
rombou has joined #ipfs
HostFat has joined #ipfs
<dignifiedquire>
daviddias: looking into jsdoc@3 looking pretty nice actually
dignifiedquire has quit [Quit: dignifiedquire]
rombou has quit [Ping timeout: 246 seconds]
disgusting_wall has joined #ipfs
MaPi_svk has joined #ipfs
zmanian_ has joined #ipfs
jabberwocky has quit [Ping timeout: 260 seconds]
evanmccarter has joined #ipfs
rombou has joined #ipfs
dignifiedquire has joined #ipfs
disgusting_wall has quit []
pfraze has joined #ipfs
disgusting_wall has joined #ipfs
voxelot has quit [Ping timeout: 272 seconds]
e-lima has quit [Ping timeout: 260 seconds]
edwardk has quit [Ping timeout: 240 seconds]
geir_ has quit [Ping timeout: 240 seconds]
bret-raspi has quit [Ping timeout: 240 seconds]
mikolalysenko has quit [Ping timeout: 240 seconds]
mvr_ has quit [Ping timeout: 240 seconds]
yosafbridge has quit [Ping timeout: 240 seconds]
daviddias has quit [Ping timeout: 240 seconds]
RJ2 has quit [Ping timeout: 240 seconds]
sindresorhus has quit [Ping timeout: 240 seconds]
NightRa has quit [Ping timeout: 240 seconds]
tperson has quit [Ping timeout: 240 seconds]
zrl has quit [Ping timeout: 240 seconds]
bret-raspi has joined #ipfs
erikj has quit [Ping timeout: 240 seconds]
ogd has quit [Ping timeout: 240 seconds]
Nitori has quit [Ping timeout: 240 seconds]
leeola has joined #ipfs
mvr_ has joined #ipfs
edwardk has joined #ipfs
NightRa has joined #ipfs
zrl has joined #ipfs
daviddias has joined #ipfs
RJ2 has joined #ipfs
mikolalysenko has joined #ipfs
sindresorhus has joined #ipfs
Nitori has joined #ipfs
tperson has joined #ipfs
yosafbridge has joined #ipfs
simpbrai1 has joined #ipfs
simpbrain has quit [Ping timeout: 240 seconds]
dignifiedquire_ has joined #ipfs
erikj has joined #ipfs
pfraze has quit [Remote host closed the connection]
ogd has joined #ipfs
substack_ is now known as substack
<ipfsbot>
[js-ipfs-api] Dignifiedquire opened pull request #132: [WIP] jsdoc3 for docs (master...jsdoc) http://git.io/v4hjl