infinity0 has quit [Remote host closed the connection]
ckwaldon has joined #ipfs
infinity0 has joined #ipfs
infinity0 has quit [Changing host]
infinity0 has joined #ipfs
ashark has quit [Ping timeout: 276 seconds]
espadrine_ has quit [Ping timeout: 276 seconds]
apiarian has quit [Ping timeout: 244 seconds]
<whyrusleeping>
lgierth: thats good to hear :)
JesseW has quit [Ping timeout: 265 seconds]
apiarian has joined #ipfs
cemerick has quit [Ping timeout: 240 seconds]
fxrs has quit [Quit: Leaving]
dignifiedquire has quit [Quit: Connection closed for inactivity]
i[m] has joined #ipfs
<i[m]>
Greetings all, why only 2048 RSA?
rgrinberg has quit [Ping timeout: 276 seconds]
apiarian has quit [Ping timeout: 265 seconds]
apiarian has joined #ipfs
<i[m]>
Oh nevermind I see there are plans to move to 25519.
<whyrusleeping>
i[m]: Yeah, and 2048 is just the default, you can specify -b=4096 on init for a bigger rsa key
<whyrusleeping>
we're hoping to have 25519 this year, it will make a lot of things really nice
<i[m]>
Oh thanks that's awesome!
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
rgrinberg has joined #ipfs
gmcquillan__ has joined #ipfs
gmcquillan__ has quit [Quit: gmcquillan__]
rgrinberg has quit [Ping timeout: 250 seconds]
rgrinberg has joined #ipfs
gmcquillan__ has joined #ipfs
mgue has quit [Quit: WeeChat 1.5]
mgue has joined #ipfs
<whyrusleeping>
Hey everyone! we released the fourth release candidate for ipfs 0.4.3
<whyrusleeping>
This one is going to be the last one (unless like, the apocalypse comes and i have to patch ipfs to handle demons invading over tcp/ip)
<whyrusleeping>
If you could all give it a try, test it out, stress it out
<whyrusleeping>
have some fun with it
<whyrusleeping>
that would be much appreciated
<richardlitt>
Woot!
<panicbit-M>
Don't jinx it 😉
<whyrusleeping>
:D
<whyrusleeping>
We have prebuilt binaries up on dist.ipfs.io
<whyrusleeping>
you'll have to click over to 'all versions' to see them, since they aren't yet the 'latest release'
<Kubuxu>
0.4.3 had to change repo format and needs a migration
<victorbjelkholm>
Kubuxu, yeah, sure, that's not a problem; the problem is that it says it didn't find any migrations, and then it proceeds to download :)
<Kubuxu>
it didn't find them locally
<victorbjelkholm>
oooh, I see.
<victorbjelkholm>
seems to have changed in this very version, so nothing to change then; just wanted to make it clearer
ppham has joined #ipfs
ppham has quit [Ping timeout: 244 seconds]
WhiteWhaleHolyGr has joined #ipfs
<WhiteWhaleHolyGr>
i just installed ipfs a few minutes ago. any suggestions on what to do with it?
<reit>
make a homepage for your cat
<WhiteWhaleHolyGr>
this is going to be really noob but how do i do that?
<victorbjelkholm>
WhiteWhaleHolyGr, create your website as normal, then add the entire folder with an index.html in it with "ipfs add -r mywebsite/" , you'll get a hash back that you can use :) If I were you, I would take a look at all the examples for some inspiration: https://ipfs.io/docs/examples/
<WhiteWhaleHolyGr>
how does the ipfs community feel about copyright law?
nycoliver has joined #ipfs
<victorbjelkholm>
nvm, got it with "bs58.encode(el.multihash).toString()"
nycoliver has quit [Ping timeout: 252 seconds]
<haad>
victorbjelkholm: for the hash: bs58.encode(el.multihash()). there might be a convenience method for this but can't remember off the top of my head.
ppham has joined #ipfs
<dignifiedquire>
mh.toB58String(el.multihash) where mh = require('multihashes')
espadrine_ has joined #ipfs
ppham has quit [Ping timeout: 252 seconds]
ppham has joined #ipfs
PseudoNoob has joined #ipfs
computerfreak has joined #ipfs
<victorbjelkholm>
dignifiedquire, ah, multihash is better to use than bs58 that I pasted above?
<victorbjelkholm>
thanks guys
<dignifiedquire>
it's doing the same thing, but easier to remember and more consistent
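A minimal sketch of the two equivalent conversions being compared here, assuming el.multihash is the raw multihash Buffer on a DAGNode (a hard-coded example hash stands in for it below):

    const bs58 = require('bs58')
    const mh = require('multihashes')

    // stand-in for el.multihash: decode a known base58 multihash back into its raw Buffer
    const multihashBuf = mh.fromB58String('QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG')

    // plain base58 encoding of the raw bytes
    const viaBs58 = bs58.encode(multihashBuf)
    // same string, via the multihashes helper -- easier to remember and more consistent
    const viaMultihashes = mh.toB58String(multihashBuf)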
<victorbjelkholm>
another question, in js-ipfs, is the cli command for cat supposed to work? All tests are passing but I see no output when doing "files cat"
<victorbjelkholm>
dignifiedquire, yeah, makes sense
ppham has quit [Ping timeout: 240 seconds]
computerfreak has quit [Client Quit]
<dignifiedquire>
hmm no idea why cat wouldn't work if the tests pass
<dignifiedquire>
-.-
<dignifiedquire>
the tests for cat don't actually test if it is working properly
<haad>
dignifiedquire: victorbjelkholm: (cc daviddias) shouldn't we do el.multihash.toBS58String() to make sure there's an intuitive conversion available for the users?
<haad>
or even better, combine with the getters: node.Hash.toBS58String()
<dignifiedquire>
multihash is just a Buffer instance, so I don't see a good way of doing that
espadrine_ has quit [Ping timeout: 276 seconds]
<haad>
dignifiedquire: you could, for example, do a getter for .multihash in DAGNode: get multihash() => mh.toBS58String(internalMultihashBuffer)
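A rough sketch of the getter haad is describing, using a hypothetical DAGNode class as a stand-in for the real js-ipfs one:

    const mh = require('multihashes')

    // hypothetical stand-in for the js-ipfs DAGNode, illustrating the suggested getter
    class DAGNode {
      constructor (multihashBuffer) {
        this._multihash = multihashBuffer   // raw multihash kept internally as a Buffer
      }

      // expose the base58 string directly, so callers don't need the multihashes module
      get multihash () {
        return mh.toB58String(this._multihash)
      }
    }

    // usage: new DAGNode(rawBuffer).multihash  // => "Qm..."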
<victorbjelkholm>
dignifiedquire, ok, I'll take a look at it. Thanks
<dignifiedquire>
victorbjelkholm: would be awesome, thanks! especially adding tests that actually verify it's working
<victorbjelkholm>
regarding which way, it feels better that we have functions that accept data and output the hash, rather than attaching the transform to the buffer/string itself
<dignifiedquire>
in my opinion a multihash should always be a buffer, unless you are printing it to the console
<victorbjelkholm>
dignifiedquire, makes sense to me too
<haad>
dignifiedquire: perhaps. you're prolly right on this and what I'm suggesting goes more under the getters that we've talked about
<haad>
what I'm saying is that there should be an easy, intuitive way to convert a data structure to "supported" formats
<haad>
if you pass objects, fine, no need for encapsulated encoding methods, but if you pass a data structure (eg. instance), they should provide convenience methods
<dignifiedquire>
that's what the multihashing is about
<dignifiedquire>
*multihashes
<dignifiedquire>
you give it the raw version ( a buffer ) and it gives you back the format you want, eg. mh.toB58String(raw)
<haad>
dignifiedquire: no, what you're providing with multihashes is a library to enable those encodings. what I'm saying is that should be part of the API of the data structure (DAGNode)
<dignifiedquire>
why?
<dignifiedquire>
it's a leak of abstraction if a DAGNode has to know how to convert a multihash buffer to a string
<haad>
umm, no? it's called encapsulation :)
<haad>
you could also argue it's leak of abstraction if DAGNode has to know how to return a buffer
<haad>
(following the same logic)
<dignifiedquire>
yes but you don't want to encapsulate the knowledge about string conversion of the multihash inside the dagnode, you want to have that encapsulated in something like multihashes
<dignifiedquire>
the dagnode just needs to know the minimal interface to a multihash, in this case it being a buffer
<haad>
dignifiedquire: again, going back to my original point, if you expose a data structure that is not a primitive, a good API design provides methods for converting the data structure between formats. so if you think DAGNode shouldn't know about conversions, then DAGNode shouldn't be a publicly exposed data structure.
<dignifiedquire>
again I disagree I'm afraid, the DAGNode should only know about conversion of itself, not all its subitems.
<dignifiedquire>
there can be a dagnode.toString() method, but if you want the hash to string, you would have dagnode.hash.toString() or mh.toString(dagnode.hash)
<haad>
dignifiedquire: ok, we're getting on the same page --> "you would have dagnode.hash.toString()" exactly, this is what I'm going after
<dignifiedquire>
that already exists today
<dignifiedquire>
buffer.toString() is a method
<dignifiedquire>
as of today multihash is a type alias for Buffer
<dignifiedquire>
so we can not put additional functionality on top of it
<haad>
dignifiedquire: and what does it return? I guess the example above is slightly off since we were talking about BS58 strings, so I suppose the question is, do we have dagnode.hash.toBS58String()?
<dignifiedquire>
if you want to go the functional route, as we currently do, you have conversion methods not attached to the data, i.e. mh.toB58String(multihash)
<dignifiedquire>
if you want to go the object oriented route you make multihash inherit from buffer and extend it with a .toB58String() method
<dignifiedquire>
those are two fundamentally different approaches to handling this, and there is no right or wrong, just two different ways of doing things
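A rough sketch of the two routes, with the object-oriented one shown as a small wrapper class rather than a true Buffer subclass:

    const mh = require('multihashes')

    // object-oriented route: a type that carries its own conversion
    class Multihash {
      constructor (buf) {
        this.buffer = buf               // raw multihash bytes
      }

      toB58String () {
        return mh.toB58String(this.buffer)
      }
    }

    // functional route (as the code does today): mh.toB58String(rawBuffer)
    // object-oriented route:                     new Multihash(rawBuffer).toB58String()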
<dignifiedquire>
victorbjelkholm: can you please file an issue for the broken cat with a description?
<dignifiedquire>
got to run now :)
<haad>
dignifiedquire: ok, I understand your perspective. so, let's provide an OO way of doing this, too, if that's not the case as of today.
<dignifiedquire>
haad: then open an issue on multiformats/multihashes and write your suggestions there on how you want multihashes to change
ppham has joined #ipfs
<haad>
dignifiedquire: nope, from a developer's point of view this is not an issue with multihashes, this is an issue with whatever-repo-contains-our-js-ipfs-api-currently
<haad>
dignifiedquire: and I'm giving you feedback atm as a user, so take it as such and into consideration in your next planning session for the js-ipfs API
<haad>
cc davidar ^
<haad>
oops not davidar, meant daviddias
<dignifiedquire>
no haad, multihashes is the package with the constructor for multihashes as it is used everywhere as of today, so if you want all high-level packages to change, the change has to happen there first
<dignifiedquire>
because we don't want two different multihashes in our code base
ppham has quit [Remote host closed the connection]
<dignifiedquire>
the change is motivated by the high-level usage, you are right about that, but if we want a consistent change we need to change the object representing a multihash in general, which is defined in multihashes
<haad>
dignifiedquire: what I'm saying is that from a developer's perspective this is an issue with the public API and as such, I don't know (and don't need to know) the internals of the API. I should have a place to raise issues (logically it would be js-ipfs, but in reality this is not the case) without knowing how it affects the internals (implementation details) of the module or where the internal modules are located. I don't mean to come off wrong, I mean we should pr
<dignifiedquire>
I miss typed languages..
<haad>
dignifiedquire: this is not on you is what I mean :) we need to keep thinking about our process in this regard too as we proceed and more developers start using it as well as contributing to it. making it simple & quick to provide feedback is in our best interest.
<dignifiedquire>
haad: right but multihashes are bigger than ipfs, that is why the feedback would be redirected there in any case, even if someone opens the issue on js-ipfs
<haad>
dignifiedquire: sure, I understand what you're saying and agree. I'm trying to provide a "non-ipfs community member" type of perspective to this.
<dignifiedquire>
okay :)
<dignifiedquire>
you could also open an issue on interface-ipfs-core because you want ipfs-api to change as well :P
<dignifiedquire>
soo many repos, so much wow, so much confusion
<dignifiedquire>
code doge approved
<haad>
dignifiedquire: I know! :D and all this kind of shows my point :)
<dignifiedquire>
haad: what do you think about mono-repos like babel and pouchdb have?
<dignifiedquire>
thanks victorbjelkholm
<victorbjelkholm>
dignifiedquire, I don't think it would be a bad idea for us to discuss if it's something we want (mono-repos), everything is spread out very thin atm
<victorbjelkholm>
think we were a bit too fast in splitting everything; usually you do that once you feel the pain of having lots of intertwined code in the same repo
cjd has left #ipfs [#ipfs]
<haad>
dignifiedquire: well... I might be alone on this one, and it might be contrarian but I love mono-repos :) I've used both (mono and a gazillion small ones) and monos always come out as easier to use and maintain in the long run. haven't thought about my position on it in the context of ipfs.
<dignifiedquire>
My initial thinking was to have one for js-ipfs and one for js-libp2p, as it was super painful to work through all the tiny repos when doing something large like the pull-stream change
<haad>
dignifiedquire: actually, I've been meaning to pull the various orbit-db-* modules back into orbit-db's main repo (while publishing them as individual modules) because of the pain of updating something in one module and it ending up being a whole chain of ten repos that need to be updated :/
<victorbjelkholm>
yeah, + it's always harder to get into contributing when everything is everywhere instead of everything in one place
<haad>
dignifiedquire: yeah, a split on that level makes sense (ipfs vs. libp2p)
<haad>
dignifiedquire: I would probably *love* that given how hard it is now to know where things are and which modules need updating when trying out a new version of js-ipfs. and the "where to report issues" and "where's the API documentation" are all related to it...
<dignifiedquire>
want to open an issue? ;) describing your experiences with mono repos
zorglub27 has joined #ipfs
sametsisartenep has joined #ipfs
sametsisartenep has quit [Client Quit]
<haad>
dignifiedquire: unfortunately I don't have time to write that long of an issue in the next few days :/ I'm crunching on the devcon demos and running out of time... will have to wait, but let's keep it in mind as I think we'll come across the topic again sooner or later :)
<dignifiedquire>
haad: no rush would just like to have a basic discussion on gh before we meet again
dmr has quit [Ping timeout: 250 seconds]
sametsisartenep has joined #ipfs
wuch has joined #ipfs
ppham has joined #ipfs
Encrypt has joined #ipfs
lidel has joined #ipfs
<haad>
I'll try to remember to open an issue
<haad>
don't count on it though
espadrine_ has joined #ipfs
<dignifiedquire>
:)
dmr has joined #ipfs
sametsisartenep has quit [Read error: Connection reset by peer]
dmr has quit [Client Quit]
sametsisartenep has joined #ipfs
nycoliver has joined #ipfs
Encrypt has quit [Quit: Quitte]
nycoliver has quit [Ping timeout: 250 seconds]
sametsisartenep has quit [Quit: leaving]
sametsisartenep has joined #ipfs
sametsisartenep has quit [Client Quit]
corvinux has joined #ipfs
<whyrusleeping>
victorbjelkholm: did the auto migrations process work well for you?
<victorbjelkholm>
whyrusleeping, yup, tiny repo though but no problems
<whyrusleeping>
nicee
spilotro has quit [Ping timeout: 244 seconds]
sametsisartenep has joined #ipfs
spilotro has joined #ipfs
fxrs has joined #ipfs
cketti has joined #ipfs
espadrine_ has quit [Ping timeout: 276 seconds]
neurrowcat has joined #ipfs
nycoliver has joined #ipfs
<daviddias>
dignifiedquire: what was that project to manage a ton of node.js modules?
<daviddias>
I keep forgetting the name of it
nycoliver has quit [Ping timeout: 276 seconds]
<daviddias>
it had a super awesome website
<daviddias>
victorbjelkholm dignifiedquire haad all of that discussion is super valuable
<daviddias>
We've talked about it a couple of times
chris613 has joined #ipfs
<daviddias>
I'm always up for increased developer productivity, if the cost of the bikeshed doesn't outweigh the benefit we get from it :)
PseudoNoob has quit [Remote host closed the connection]
<lgierth>
it's not exactly what you're asking for but meh maybe it helps :)
<sdgathman>
It's close - IPFS in grub. So you need only one boot disc/stick.
<lgierth>
i see
<lgierth>
ipget could maybe do that
<sdgathman>
Not sure if that project is actually grub based.
<lgierth>
if it got some love
<lgierth>
no, astralboot is just a DHCP/TFTP server
<lgierth>
so the last mile is still unauthenticated
<sdgathman>
I see that
<lgierth>
mh that would indeed be pretty cool, ipfs support in grub
<lgierth>
(for context, ipget is supposed to be(come) the ipfs-version of wget)
<whyrusleeping>
pubsubpubsub hubbub
taaem has joined #ipfs
taaem has quit [Read error: Connection reset by peer]
taaem has joined #ipfs
Guest98413 has joined #ipfs
Guest98413 is now known as Mateon1
<sdgathman>
Speaking of hashes, I need freedos-1.0 to update my BIOS. I *think* this is telling me (under Additional Information VirusTotal metadata) that the image with this hash has been around since 2008.
<sdgathman>
Yeah, Fedora also blocks all network access during official builds.
<sdgathman>
Excellent policy.
<lgierth>
weeelll it treats the symptoms only
<lgierth>
what they probably actually want is to provably verify all dependencies
<sdgathman>
The source is included in the SRPM, and the package metadata includes the hash of all source objects used in the build.
cketti has quit [Quit: Leaving]
<lgierth>
ah so you still have a means to reference dependencies and don't *have* to vendor them
<lgierth>
cool
<lgierth>
i was thinking "that must be a pain!"
<sdgathman>
Anyway, it may not be the One True Way, but I'm comfortable with Fedora policy - hence I work on packages for it.
<sdgathman>
Yeah, a full Fedora distro with SRPMs (about 50G) includes all the source to rebuild everything from scratch on an isolated desert island.
<sdgathman>
I like that about git as well.
<sdgathman>
You have local copies of everything.
<sdgathman>
So being off the grid for a day doesn't slow you down.
<lgierth>
the full fedora distro sounds like a cool thing to put on ipfs
<sdgathman>
I don't know if anyone has actually put it all on an iso.
<lgierth>
but is there a script or something that gets you a directory or two which contain all of it?
<sdgathman>
Yes, there is a directory to download all of it.
<sdgathman>
I'll add that as a side project to packaging ipfs for Fedora.
<sdgathman>
I wonder if the-kenney is working on Fedora, or some other distro.
<sdgathman>
I need to get together with them if we are on the same distro.
<lgierth>
i think they're working on nixpkg
<lgierth>
judging from the github profile
<sdgathman>
Generally, Fedora is distributed as DVD sized "spins" with packages selected for different application focuses (music, scientific, etc).
<sdgathman>
An "Everything" spin on ipfs might be interesting.
ligi has joined #ipfs
j0hnsmith has quit [Quit: j0hnsmith]
<sdgathman>
Since ipfs supports directory objects, it should still be a directory with a ton of source and binary rpms. That way, you don't have to necessarily download the entire gigantic blob.
<sdgathman>
And a lot of the rpms are unchanged between releases.
<sdgathman>
So they would be shared.
<sdgathman>
And Fedora has a netinstall that would be cool with ipfs support.
<sdgathman>
So lots of projects once ipfs is packaged and stable.
cemerick has joined #ipfs
<lgierth>
yes!! :)
<sdgathman>
In fact, when packaged and stable, I could forcefully argue that it should replace pycurl as the method of fetching packages for the package manager.
<sdgathman>
That would dispense with the network of http/ftp mirrors. (Although might need to keep them in case ipfs is blocked.)
doesntgolf has joined #ipfs
<sdgathman>
Question: are tools like ipget lighter than a full ipfs node? Maybe similar size to wget?
<sdgathman>
So you wouldn't need a full node on the same machine?
zorglub27 has quit [Quit: zorglub27]
<sdgathman>
Another common problem solved by ipfs is updating multiple similar machines at a home or workplace. You want the rpms cached locally - not downloaded from central sources for every box.
cemerick has quit [Ping timeout: 276 seconds]
<sdgathman>
Current solutions (e.g. using squid-cache) are problematic.
ljhms has joined #ipfs
Peeves has quit [Read error: Connection reset by peer]
Peeves has joined #ipfs
JesseW has joined #ipfs
ppham has quit [Remote host closed the connection]
ppham has joined #ipfs
ppham has quit [Ping timeout: 250 seconds]
<sdgathman>
shutting down to reflash bios :-P
Peeves has quit [Ping timeout: 250 seconds]
PrinceOfPeeves has joined #ipfs
m0ns00n has joined #ipfs
achin has joined #ipfs
m0ns00n has quit [Client Quit]
Peeves has joined #ipfs
A124 has joined #ipfs
Peeves has quit [Ping timeout: 250 seconds]
guest___ has joined #ipfs
PseudoNoob has joined #ipfs
Peeves has joined #ipfs
<achin>
just getting around to playing with ipfs again
WhiteWhaleHolyGr has quit [Ping timeout: 264 seconds]
<achin>
getting slightly spammed with the `flatfs: too many open files` error
<achin>
so when I look at what's going on via /proc/<daemon_pid>/fd I see like 800 open FDs to $IPFS_PATH/blocks
<achin>
not to files within the blocks directory, but to the blocks directory itself
Oatmeal has joined #ipfs
j0hnsmith has joined #ipfs
ppham has joined #ipfs
rgrinberg has quit [Ping timeout: 276 seconds]
j0hnsmith has quit [Quit: j0hnsmith]
Peeves has quit [Ping timeout: 250 seconds]
rgrinberg has joined #ipfs
rendar has joined #ipfs
<richardlitt>
dignifiedquire: thanks for the help
sija has joined #ipfs
Akiiki has joined #ipfs
<Akiiki>
@jbenet @whyrusleeping Why does ipfs daemon stay live on port 8080 even after i close it