lgierth changed the topic of #ipfs to: go-ipfs v0.4.7 is out! https://dist.ipfs.io/go-ipfs | Week 11+12: Documentation https://git.io/vyHgk | Roadmap: https://waffle.io/ipfs/roadmaps | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | FAQ: https://git.io/voEh8 | Logs: https://botbot.me/freenode/ipfs/ | Code of Conduct: https://git.io/vVBS0
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
hoenir has joined #ipfs
hoenir has quit [Client Quit]
HawkeyeTenderwol has joined #ipfs
tmg has quit [Ping timeout: 260 seconds]
<M-anomie> On Ubuntu, how do I keep my version of ipfs up to date?
arpu has quit [Ping timeout: 260 seconds]
kvda has joined #ipfs
<whyrusleeping> install ipfs-update and run it when there are new releases
<whyrusleeping> I think there is a snapcraft release for ipfs too if you use that
hoenir has joined #ipfs
<M-anomie> ...I can't just have a ppa?
<hoenir> Hi guys.
<whyrusleeping> M-anomie: If someone is maintaining a ppa, then sure. But i'm not aware of one
<whyrusleeping> supposedly snapcraft is the new ubuntu packaging system (or so i was told), so theres this: https://github.com/elopio/ipfs-snap
<hoenir> Can anyone explain why the go-ipfs implementation has this weird import path of the form /ipfs/{hash}/{pkg_name}?
<hoenir> I'm interested to contribute to the project.
<hoenir> Can anyone share some links for code guidelines on this project?
<lgierth> that looks like a typo
<whyrusleeping> hoenir: thats a gx import path
<whyrusleeping> more about gx here: https://github.com/whyrusleeping/gx
<hoenir> why use this path instead of the classic idiomatic go import path like {vcs}/{pkg}/{sub-package}
<hoenir> ?
<hoenir> ok
<hoenir> whyrusleeping: I will take a look now :)
<hoenir> thanks for the fast link
<whyrusleeping> hoenir: because with the classic paths, upstream dependencies change and break ipfs
<whyrusleeping> gx ensures that every time you download and build ipfs you're getting exactly what you expect
<whyrusleeping> no random github repos breaking things
<whyrusleeping> no accidentally pushing a breaking change as a patch update in semver
<whyrusleeping> *exactly* the code we tested and published
<hoenir> ahh, so basically you lock down the versions of the dependencies in order to not break ipfs. Which is a good reason.
<whyrusleeping> yeap!
<hoenir> never heard of the gx pkg manager
<whyrusleeping> i wrote it after getting really really frustrated with all the existing go packaging tools
<hoenir> yeah I know the go ecosystem lacks a good pkg manager or a standard one
<hoenir> Now this makes sense with the weird hashy import path thing.
<whyrusleeping> yeap, if you want to go back to the normal paths, you can just run `gx-go rewrite --undo`
<whyrusleeping> or to selectively change packages back: `gx-go rewrite --undo package-name`
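For reference, a minimal sketch of what the rewrite toggles in a Go file; the multihash below is a placeholder, not a real gx package hash, and go-log just stands in for any dependency:

```go
// Illustrative only: the hash is a placeholder and will not resolve.
package main

import (
	// After `gx-go rewrite`, imports point at an exact, content-addressed
	// copy of the dependency vendored under gx/ipfs/:
	logging "gx/ipfs/QmPlaceholderPackageHash/go-log"

	// After `gx-go rewrite --undo`, the same import goes back to the
	// normal path:
	// logging "github.com/ipfs/go-log"
)

func main() {
	logging.Logger("example")
}
```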
antiantonym has joined #ipfs
arpu has joined #ipfs
<whyrusleeping> gx also allows you to easily see the entire dependency structure with : `gx deps --tree` (this is probably my favorite command)
<hoenir> That's nice.
<hoenir> This is really nice though
<whyrusleeping> :D
<whyrusleeping> I'm really glad to hear that :)
tmg has joined #ipfs
<Mateon1> By the way, what is the status of depviz? Last time I looked, it seemed development had halted.
<Mateon1> Quite a shame, it would be really useful
aedigix has quit [Remote host closed the connection]
<whyrusleeping> Mateon1: agreed...
<whyrusleeping> We're talking about trying to get it moving forward again
<whyrusleeping> but we're already stretched pretty thin right now
aedigix has joined #ipfs
<whyrusleeping> lgierth: you around?
<lgierth> yeah
<whyrusleeping> wanna try out something cool?
<whyrusleeping> (hopefully cool)
<lgierth> sure
<whyrusleeping> 'go get -u github.com/whyrusleeping/gx-go'
<whyrusleeping> 'cd $YOUR_FAVORITE_PROJECT'
<whyrusleeping> 'gx-go devcopy'
<whyrusleeping> it doesnt quite work on go-ipfs yet
<whyrusleeping> because we have a duplicate dependency
<hoenir> Why doesn't this project use some error library for error return values? Like, for example, https://github.com/pkg/errors
<hoenir> And why are the error messages so messy and random?
<hoenir> and why are there entire blocks of code without a single comment?
<lgierth> :)
<lgierth> working on it
<lgierth> whyrusleeping: :D ../../../workspace/gopath/src/github.com/libp2p/go-libp2p/p2p/protocol/identify/id.go:98: cannot use ids.Reporter (type "github.com/libp2p/go-libp2p/vendor/github.com/libp2p/go-libp2p-metrics".Reporter) as type "github.com/libp2p/go-libp2p/vendor/gx/ipfs/QmaMSrAXMpMhsrbGZYmGXE4X1ttkFv7KZSpGa5AKYTUpPD/go-libp2p-metrics".Reporter in argument to meterstream.WrapStream:
<lgierth> it looks pretty useful though
<Mateon1> hoenir: Regarding commenting code, I think it should be kept to a minimum: "why do this?" not "what does this do?". Functions should be documented, but I'm not sure if GoDoc works with comments or something else, as I haven't worked with Go enough.
<lemmi> i like the errors package a lot. *especially* when external libraries do something stupid. gives you a good pointer to where to start looking
<hoenir> I'm asking because I'm interested to contribute to the project in my spare time.
<Kubuxu> re: comments, what comments usually do best is getting out of date as code moves forward
<hoenir> I can start adding the errors pkg into the go-ipfs implementation
<hoenir> and if the code is updated, the comments will also be updated, so what's the problem?
Guest52219 has quit [Quit: Leaving]
<SchrodingersScat> ERROR commands/h: open /home/anon/.ipfs/blocks/K7/put-367889218: too many open files client.go:247
<Kubuxu> SchrodingersScat: see one of recent issues, it is known
<SchrodingersScat> :(
<Kubuxu> hoenir: thing is, they won't. There are multiple examples of that in programming in general, even in our codebases where we use comments sparingly
<hoenir> I think the logic of "well, I don't write comments because I will change the code in the near future anyway, so why bother?" doesn't justify anything
<Kubuxu> SchrodingersScat: you can try working around by increasing `ulimit -n 2048` or something
<whyrusleeping> incorrect comments are far worse than no comments
<Kubuxu> lemmi: we are mostly discussing code comments, not doc comments. At least I was.
<whyrusleeping> readable code is paramount
<Kubuxu> Doc comments are something we are lacking a bit, but we're working on it.
<SchrodingersScat> Kubuxu: It seems to make slow progress each iteration, so I wrapped it in an 'until' loop.
<Kubuxu> yeah, it will
<whyrusleeping> Yeah, adding doc strings to functions and packages is something we need to do
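A minimal sketch of the doc-comment convention being discussed: godoc picks up the comment written directly above the package clause or an exported identifier. The package and names here are made up:

```go
// Package blockstore is a made-up example package; the comment directly above
// the package clause is what godoc shows on the package index page.
package blockstore

import "errors"

// ErrNotFound is returned when a block is not present in the local store.
// Doc comments on exported identifiers start with the identifier's name.
var ErrNotFound = errors.New("block not found")

// Get returns the raw bytes of the block with the given key, or ErrNotFound
// if the local datastore does not have it.
func Get(key string) ([]byte, error) {
	// (stubbed out; only the doc comments matter for this example)
	return nil, ErrNotFound
}
```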
<lemmi> Kubuxu: ah
<lemmi> probably missed a word somewhere
<Kubuxu> SchrodingersScat: but increasing ulimit to 2048 should resolve it
<hoenir> whyrusleeping: isn't that why you review PRs? so that the comments also get updated?
espadrine has quit [Ping timeout: 260 seconds]
<SchrodingersScat> Kubuxu: do you really want to make me cry ;_; ?
<Kubuxu> yes
<Kubuxu> :D
<hoenir> I can start refactoring the code little by little if you want...
<hoenir> and add the pkg errors in the error handling system
<Kubuxu> re err lib: it brings a non-trivial error propagation cost by capturing stack dumps
<lemmi> my approach to documentation: do comments on public functions for godoc, for the rest i usually try to use variable names and function names to document most things. also i usually drop a line or two in front of for loops or something
<Kubuxu> and it also prevents direct comparison of errors, which is done very frequently
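A small illustration of the two costs mentioned here, assuming github.com/pkg/errors: Wrap captures a stack trace at the call site, and the wrapped value no longer compares equal to the sentinel it wraps, so direct `err == io.EOF` style checks break unless callers unwrap with errors.Cause:

```go
package main

import (
	"fmt"
	"io"

	"github.com/pkg/errors"
)

func read() error {
	// errors.Wrap captures a stack trace right here (the propagation cost
	// mentioned above) and returns a new value wrapping io.EOF.
	return errors.Wrap(io.EOF, "reading block")
}

func main() {
	err := read()
	fmt.Println(err == io.EOF)               // false: direct comparison breaks
	fmt.Println(errors.Cause(err) == io.EOF) // true: callers must unwrap first
}
```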
<whyrusleeping> hoenir: If your code is sufficiently long that you need comments in it, that means one of a few things:
<whyrusleeping> 1. The code needs to be broken up into smaller, well scoped functions
<whyrusleeping> 2. The code itself is poorly written and unclear to the reader (non-obvious)
<whyrusleeping> 3. The code is too complicated and needs to be simplified
<lemmi> that ^
<Kubuxu> 4. You are doing black magic and you shouldn't do it, but you have to because it is SO_REUSEPORT
<lemmi> (still likes to see documentation for the public stuff)
<hoenir> Not always... sometimes it's very helpful to have some comments alongside the code, especially when you are a newcomer to the project
<lgierth> ^
<whyrusleeping> hoenir: take a look at the link lemmi posted about documenting go code
<Kubuxu> yes, comments can bring a newcomer up to speed faster, but under-maintained comments (misleading, outdated, no longer fully valid) offset that added value
<hoenir> So should I add the errors pkg into the mix?
<hoenir> So what are your thoughts on this?
<Kubuxu> It isn't a decision that can be taken on a whim. We have over 5000 err variables in go-ipfs, and over 500 places where new errors are created
<Kubuxu> and it is just go-ipfs
<whyrusleeping> also pulling a full stack trace from the runtime for every error will be expensive
<whyrusleeping> we did this at one point in the past and removed them all since
<hoenir> Or, if you don't want that, the errors should at least be more organized and follow a well-defined standard error message format
<whyrusleeping> that i can agree with
<whyrusleeping> all our calls to fmt.Errorf or errors.New should pass a golint
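A short sketch of what golint expects from error strings: lowercase, no trailing punctuation, and ErrXxx naming for exported error variables. The package and function here are made up:

```go
package example

import (
	"errors"
	"fmt"
)

// golint wants error strings to start lowercase and to carry no trailing
// punctuation, since they usually get embedded in longer messages; exported
// error variables are conventionally named ErrXxx.
var ErrNoSuchKey = errors.New("no such key") // not "No such key!"

func lookup(key string) error {
	// fmt.Errorf messages follow the same convention.
	return fmt.Errorf("lookup of %q failed: %v", key, ErrNoSuchKey)
}
```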
<kevina> whyrusleeping: have a sec, have a look at https://github.com/ipfs/go-ipfs/pull/3712
<kevina> did I do what you wanted?
<whyrusleeping> that looks like what i had in mind
<whyrusleeping> does it still print the double error message thing?
<lemmi> if one wrapped the errors package, it could maybe be enabled with a command-line option
<whyrusleeping> i'm setting up a broken repo so i can verify
<kevina> whyrusleeping: yes
<whyrusleeping> :/
<lemmi> otherwise just use the normal errors without stacktrace
* whyrusleeping investigates
<kevina> whyrusleeping: you can just look at the output from the test cases
<whyrusleeping> oh
<whyrusleeping> right
<kevina> "./t0087-repo-robust-gc.sh -v"
* whyrusleeping rm -rf /tmp/ipfs-test
<Kubuxu> lemmi: yeah, but is it then really worth it? It will be disabled 99% of the time, and it makes the codebase harder for new users (i.e. everyone else uses errors.New and fmt.Errorf, so why are you doing it some weird way?)
<lemmi> Kubuxu: i had good experiences with that particular package, but i'm not that resource limited. also i think it should be easy enough to wrap the selection into our own lib, so it more or less looks the same but can be switched at any time
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
kvda has joined #ipfs
<lemmi> maybe dave cheney could even be reasoned with and have a switch built in :)
<whyrusleeping> hes generally pretty reasonable
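A hypothetical sketch of the wrapper lemmi suggests, not existing go-ipfs code: a thin package that keeps one import site and only pays for stack traces when a switch is flipped:

```go
// Package xerrors is a hypothetical wrapper sketching the "switch" idea:
// callers always import this package, and stack traces can be turned on or
// off without touching call sites.
package xerrors

import (
	stderrors "errors"

	pkgerrors "github.com/pkg/errors"
)

// WithStack could be driven by a build tag or a debug flag.
var WithStack = false

// New returns a plain error, or one carrying a stack trace when WithStack is set.
func New(msg string) error {
	if WithStack {
		return pkgerrors.New(msg)
	}
	return stderrors.New(msg)
}

// Wrap annotates err with msg, adding a stack trace only when WithStack is set.
func Wrap(err error, msg string) error {
	if err == nil {
		return nil
	}
	if WithStack {
		return pkgerrors.Wrap(err, msg)
	}
	return pkgerrors.WithMessage(err, msg)
}
```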
matoro has joined #ipfs
<whyrusleeping> hoenir: if you want, you can pick a given package and make it golint compliant
<whyrusleeping> Thats generally 100% non-controversial ;)
<lemmi> probably not a bad idea to have a look at the tree while also doing something useful :)
robattil1 has joined #ipfs
robattila256 has quit [Ping timeout: 260 seconds]
cxl000 has quit [Ping timeout: 260 seconds]
cxl000 has joined #ipfs
amdi_ has joined #ipfs
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
infinity0 has quit [Ping timeout: 240 seconds]
infinity0_ has joined #ipfs
infinity0_ has quit [Changing host]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
engdesart has joined #ipfs
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
hoenir has quit [Quit: leaving]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
robattil1 has quit [Quit: WeeChat 1.7]
kvda has joined #ipfs
tmg has quit [Ping timeout: 264 seconds]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
cxl000 has quit [Ping timeout: 260 seconds]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
zabirauf has quit [Remote host closed the connection]
zabirauf has joined #ipfs
gully-foyle has quit [Ping timeout: 246 seconds]
dima-rostov[m] has joined #ipfs
<dima-rostov[m]> Test
dima-rostov[m] has left #ipfs ["User left"]
zabirauf has quit [Ping timeout: 240 seconds]
zabirauf_ has joined #ipfs
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
gully-foyle has joined #ipfs
robattila256 has joined #ipfs
chris613 has left #ipfs [#ipfs]
kvda has joined #ipfs
antiantonym has quit [Ping timeout: 246 seconds]
<zabirauf_> Hi, does js-ipfs work in the browser without connecting to an ipfs node running locally on the machine?
matoro has quit [Remote host closed the connection]
matoro has joined #ipfs
matoro has quit [Remote host closed the connection]
antiantonym has joined #ipfs
skeuomorf has quit [Ping timeout: 240 seconds]
stoopkid has joined #ipfs
antiantonym has quit [Ping timeout: 260 seconds]
Foxcool has joined #ipfs
wallacoloo____ has quit [Quit: wallacoloo____]
dignifiedquire has quit [Quit: Connection closed for inactivity]
pawn has joined #ipfs
<pawn> Can I have a URL to orbit please?
<whyrusleeping> orbit.chat
<pawn> one with a content address?
<lgierth> dig +short TXT _dnslink.orbit.chat
<lgierth> eeh, dig +short TXT orbit.chat
antiantonym has joined #ipfs
pawn has quit [Remote host closed the connection]
skeuomorf has joined #ipfs
gaf__ has quit [Quit: SMOS - http://smos-linux.org]
mguentner2 has quit [Quit: WeeChat 1.7]
mguentner has joined #ipfs
wallacoloo____ has joined #ipfs
MrControll has quit [Quit: Leaving]
spacebar_ has joined #ipfs
_whitelogger has joined #ipfs
mguentner2 has joined #ipfs
aguardar has quit [Quit: Connection closed for inactivity]
mguentner has quit [Ping timeout: 240 seconds]
Guest100_ has joined #ipfs
arkimedes has quit [Ping timeout: 260 seconds]
gmcabrita has quit [Quit: Connection closed for inactivity]
bullcomber has quit [Read error: Connection reset by peer]
enfree has joined #ipfs
amdi_ has quit [Quit: Leaving]
caiogondim has quit [Quit: caiogondim]
dimitarvp has quit [Quit: Bye]
<substack> peermaps tilemesh loading directly from ipfs onto the gpu https://gateway.ipfs.io/ipfs/QmdEeJ3nc5JrRfumGb2XCsfui1jXeXs4jyainZy2W3wnL6
kulelu88 has quit [Quit: Leaving]
<Stskeeps> substack: neat
Foxcool has quit [Ping timeout: 256 seconds]
anewuser has joined #ipfs
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<interfect[m]> whyrusleeping: I comment everything I ever write. I always have to write out what I'm doing in comments first, so I know it's actually a well-formed idea
<interfect[m]> The other reason you need comments is that code can be very explicit on the what but provides absolutely no indication as to the why
<interfect[m]> So you need to say things like "move this pointer so we preserve our invariant that blah"
<interfect[m]> Or "We're not allowed to do this the first way that occurs to us because that would violate this other precondition"
<interfect[m]> It's easy to look at code with no comments and think that you understand it and that it's correct, but then if you go through and write out justification for what's happening, you will find that a crucial step is missing, or that some code is superfluous and not required to accomplish what you're doing, or that the whole approach is wrongheaded and can never work.
<interfect[m]> Whereas if you have just code all you're thinking about is the how and the what
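A contrived Go sketch of the "why, not what" style interfect[m] describes; the queue type is invented purely to carry the comment:

```go
package main

type item struct{ data []byte }

type queue struct{ items []*item }

func (q *queue) pop() *item {
	it := q.items[0]
	// Why, not what: re-slicing with q.items = q.items[1:] would keep the
	// popped element reachable through the backing array, breaking the memory
	// bound assumed elsewhere, so shift everything down and nil out the tail.
	copy(q.items, q.items[1:])
	q.items[len(q.items)-1] = nil
	q.items = q.items[:len(q.items)-1]
	return it
}

func main() {
	q := &queue{items: []*item{{data: []byte("a")}, {data: []byte("b")}}}
	_ = q.pop()
}
```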
caiogondim has joined #ipfs
muvlon has quit [Ping timeout: 256 seconds]
_whitelogger has joined #ipfs
enfree has quit [Ping timeout: 240 seconds]
muvlon has joined #ipfs
spacebar_ has quit [Quit: spacebar_ pressed ESC]
kvda has joined #ipfs
kvda has quit [Client Quit]
robattila256 has quit [Ping timeout: 268 seconds]
_whitelogger has joined #ipfs
chungy_ has joined #ipfs
chungy has quit [Ping timeout: 256 seconds]
athan has quit [Remote host closed the connection]
rendar has joined #ipfs
chungy_ is now known as chungy
engdesart has quit [Quit: engdesart]
tmg has joined #ipfs
_whitelogger has joined #ipfs
kevina has quit [Read error: Connection reset by peer]
zabirauf_ has quit [Ping timeout: 268 seconds]
tmg has quit [Ping timeout: 260 seconds]
antiantonym has quit [Ping timeout: 240 seconds]
cxl000 has joined #ipfs
antiantonym has joined #ipfs
kevina has joined #ipfs
Caterpillar has joined #ipfs
reit has quit [Ping timeout: 240 seconds]
gaf_ has joined #ipfs
dignifiedquire has joined #ipfs
Encrypt has joined #ipfs
Encrypt has quit [Client Quit]
maxlath has joined #ipfs
wallacoloo____ has quit [Quit: wallacoloo____]
Oatmeal has quit [Read error: Connection reset by peer]
Oatmeal has joined #ipfs
<horrified> what's wrong with archives.ipfs.io? barely any links manage to load
nick[m] has joined #ipfs
stoopkid has quit [Quit: Connection closed for inactivity]
Ronsor has quit [Ping timeout: 260 seconds]
ZaZ has joined #ipfs
maxlath has quit [Ping timeout: 264 seconds]
tmg has joined #ipfs
caiogondim has quit [Quit: caiogondim]
cxl000 has quit [Ping timeout: 256 seconds]
cxl000 has joined #ipfs
gmcabrita has joined #ipfs
Encrypt has joined #ipfs
warbo` has joined #ipfs
warbo`` has joined #ipfs
warbo has quit [Ping timeout: 258 seconds]
warbo` has quit [Ping timeout: 268 seconds]
__uguu__ has joined #ipfs
__uguu__ has quit [Client Quit]
__uguu__ has joined #ipfs
__uguu__ has joined #ipfs
__uguu__ has quit [Changing host]
__uguu__ has left #ipfs [#ipfs]
dignifiedquire has quit [Quit: Connection closed for inactivity]
dignifiedquire has joined #ipfs
warbo``` has joined #ipfs
warbo``` has left #ipfs [#ipfs]
warbo`` has quit [Ping timeout: 268 seconds]
Monokles has quit [Ping timeout: 260 seconds]
Encrypt has quit [Quit: Quit]
melvster has joined #ipfs
<melvster> can someone explain how the # character is used in ipfs, e.g. to pull in an app?
skeuomorf has quit [Ping timeout: 240 seconds]
Monokles has joined #ipfs
<deltab> can you give an example?
<melvster> deltab, I was watching the ipfs video and it showed a movie being played but it also pulled in the movie player, by using the # character and downloading the player itself from ipfs ...
Foxcool has joined #ipfs
Monokles has quit [Ping timeout: 258 seconds]
<deltab> the javascript on the player page uses window.location.hash.substring(1)
<deltab> that gives it the path to the video to be played
<melvster> deltab, ah ha, I see, very smart
<melvster> so it pulls it in dynamically using web technology, nothing specific to ipfs
<melvster> great feature
<deltab> right
<deltab> that and similar are at the end of https://ipfs.io/docs/examples/
<melvster> ty!
Monokles has joined #ipfs
<melvster> would there be any value in building a rating / ranking system for ipfs content?
<deltab> sure
<melvster> I could work on this
<melvster> my other project is a rating system for the web
<melvster> let me see if i can create a prototype / demo
_whitelogger has joined #ipfs
daurnimator has joined #ipfs
<daurnimator> Hi
<daurnimator> I was just reading through multiformats
<daurnimator> I was wondering if there is registry of ciphers you could recommend
Ronsor has joined #ipfs
Ronsor is now known as Guest57795
<melvster> but looks like that might not be yet aligned with multihash
<daurnimator> That's a link to "Hash Algorithms"
<daurnimator> "Encryption Algorithms" further down is ciphers; but it's not a very large list
<melvster> oh you meant ciphers for encryption?
gpestana has quit [Quit: Connection closed for inactivity]
<melvster> yeah i know, IANA isn't that well updated, but it's probably the top-level place for registries of that kind ... i'm kind of new here, so was just passing on what I know ... multihash probably has its own list, but it would be nice to get everything in sync :)
<daurnimator> I would at least expect to see AES-{128,192,256}-{CBC,OCB,GCM} and XSalsa20 on such a list
<melvster> true
<melvster> it's just a case of people volunteering to send in updates
<daurnimator> updates to what?
<melvster> the list
<melvster> anyway i probably didnt answer your question, apologies for the distraction ...
<daurnimator> the EAP-POTP list?
<daurnimator> I'm not sure my requirements are the same as theirs...
stoopkid has joined #ipfs
<daurnimator> the ipsec list is slightly better... but still incomplete: https://www.iana.org/assignments/ipsec-registry/ipsec-registry.xhtml#ipsec-registry-4
<daurnimator> and it wouldn't be the right place either: e.g. due to how ipsec works, a CFB cipher wouldn't make any sense: you'd mainly want CBC
dimitarvp has joined #ipfs
maxlath has joined #ipfs
<SchrodingersScat> Kubuxu: 07:48:11.750 ERROR commands/h: open /home/anon/.ipfs/blocks/Y6/put-579985381: too many open files client.go:247 https://www.youtube.com/watch?v=2nXGPZaTKik The hilarious part was this was part of an old script before IPFS was crippled and so it deletes the files after attempting to insert. I wasn't testing for failure, oops.
<Kubuxu> :(
<Kubuxu> It is funny, as it broke because we fixed DHT
<Kubuxu> other way to work around is to run `ipfs daemon --offline` for the add
<SchrodingersScat> it is funny, my ":|" face as I see my script proudly announce that it failed not even 1/3 way in, and then removed the file
<SchrodingersScat> daemon can be offline for an add?
<SchrodingersScat> oh, i see, that's offline mode, so there's no dht to clog up my open files?
<Kubuxu> yeah
<Kubuxu> or `ipfs daemon --routing=none`
<SchrodingersScat> I still prefer in bash: until ipfs add -w "$file" ; do sleep .5 ; done
<melvster> I got the 'too many open files' error, but it didnt delete the file (phew!) :)
<SchrodingersScat> it'll get there eventually, maybe not today, maybe not next week, but eventually
<SchrodingersScat> melvster: yeah, I'm dumb, it tests for completion now.
jkilpatr has joined #ipfs
<melvster> So I think I have made a link between the IPFS and HTTP metadata:
<melvster> <ni:///multihash;QmZvTvRQ2voimuYwBtKsyMqMqirDt5Xrq4sdow2RM5ynKj>
<melvster> <ipfs:/ipfs/QmZvTvRQ2voimuYwBtKsyMqMqirDt5Xrq4sdow2RM5ynKj> ;
<melvster> "audio/mpeg" ;
<melvster> "NodeUp: A Node.js Podcast - Episode 114 - Internationalization Deep Dive" .
<melvster> (apologies for flooding)
<melvster> I wonder if this would be useful ...
<melvster> it gives the title, contentType and webseeds of a given ipfs hash
<melvster> I could also add keywords and tags to enable searching
maxlath has quit [Ping timeout: 260 seconds]
<melvster> That would then enable rating / ranking different ipfs content, which may be useful for finding content
<melvster> And maybe also help sites like archive.org save bandwidth ...
ZaZ has quit [Quit: Leaving]
espadrine has joined #ipfs
<melvster> i just wrote to archive.org to see if they'd like to implement this ... :)
hoenir has joined #ipfs
<fil_redpill> Cool
robattila256 has joined #ipfs
maxlath has joined #ipfs
chris613 has joined #ipfs
monkhood has quit [Ping timeout: 240 seconds]
spacebar_ has joined #ipfs
Encrypt has joined #ipfs
tmg has quit [Ping timeout: 256 seconds]
chungy has quit [Ping timeout: 246 seconds]
maxlath has quit [Ping timeout: 260 seconds]
chungy has joined #ipfs
chungy has quit [Ping timeout: 264 seconds]
chungy has joined #ipfs
Encrypt has quit [Quit: Quit]
anewuser has quit [Ping timeout: 240 seconds]
<lidel> Are both Chrome extensions broken with v0.4.6 ? https://github.com/lidel/ipfs-firefox-addon/issues/218#issue-215258331
<hoenir> why don't we use "gopkg.in/check.v1" for unit testing?
arpu has quit [Quit: Ex-Chat]
arpu has joined #ipfs
ioth1nkn0t has joined #ipfs
<ioth1nkn0t> moin
<hoenir> ?
<ioth1nkn0t> as in 'Hi'
chris613 has left #ipfs [#ipfs]
<ioth1nkn0t> awesome project guys
<ioth1nkn0t> If I set this up locally and I'm behind a router, it requires port forwarding, right? or is there a constant reverse tcp connection?
shizy has joined #ipfs
<lidel> ioth1nkn0t, i think it should just work, NAT traversal was implemented last year https://github.com/ipfs/go-ipfs/issues/57#issuecomment-168350040
<ioth1nkn0t> aah cool, that's awesome and kinda scary at the same time
<ioth1nkn0t> thanks!
<lidel> :)
zabirauf has joined #ipfs
<lidel> whyrusleeping, is there an up-to-date description of current NAT traversal solution?
shizy has quit [Ping timeout: 258 seconds]
hoenir has quit [Quit: leaving]
amdi_ has joined #ipfs
chungy has quit [Read error: Connection timed out]
chungy has joined #ipfs
nick[m] is now known as nick2000
spacebar_ has quit [Quit: spacebar_ pressed ESC]
stoopkid has quit [Quit: Connection closed for inactivity]
anewuser has joined #ipfs
spacebar_ has joined #ipfs
ioth1nkn0t has quit [Quit: leaving]
arpu has quit [Ping timeout: 246 seconds]
Foxcool has quit [Ping timeout: 256 seconds]
caiogondim has joined #ipfs
hoenir has joined #ipfs
<lidel> melvster, I hinted at using .well-known some time ago, but it is “not intended for general information retrieval or establishment of large URI namespaces” https://tools.ietf.org/html/rfc5785#page-3
<lidel> and nobody revisited it since then :(
<lidel> melvster, there is already established standard for providing multiple alternative sources for the same resource called "Metalink" https://en.wikipedia.org/wiki/Metalink
<melvster> lidel, thanks will look ...
<lidel> I remember people did not like it was XML based (https://tools.ietf.org/html/rfc5854) but it seems it is now possible to put the same info in HTTP headers instead: https://tools.ietf.org/html/rfc6249
<melvster> sure
<melvster> hmm why xml I wonder ...
<melvster> oh because of dsig, that never really caught on ...
<melvster> i think you can just do rel="describedBy" then in that described by doc you can have sameAs terms ...
<melvster> good food for thought tho, thanks for passing it on!
<melvster> The "metalink:url" element contains a file IRI. Most metalink:file
<melvster> container elements will contain multiple metalink:url elements, and
<melvster> each one SHOULD be a valid alternative to download the same file.
<melvster> that's nice
<melvster> a little bit like BEP 19 : http://www.bittorrent.org/beps/bep_0019.html
<melvster> which webtorrent uses
<melvster> sameAs I think would work equally well here, and can also go into link headers, but modifying link headers can be a challenge ...
jedahan has joined #ipfs
<melvster> I think at this point it would be easier to look in a well known location than to modify link headers. Though longer term link headers are probably the way to go ...
<Guest57795> WebTorrent
<melvster> what I think I'll do is rel="describedBy" and then point to a .well-known meta file for that hash
kthnnlg has joined #ipfs
Mateon1 has quit [Ping timeout: 260 seconds]
Mateon1 has joined #ipfs
<hoenir> can anyone explain why we don't use a well-known testing API? like gocheck?
<hoenir> ?
amdi_ has quit [Remote host closed the connection]
<hoenir> I could start writing tests using the gocheck API, which would improve the unit tests a lot.
<hoenir> Anyone have any opinions about this?
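For context, a minimal sketch of what a gopkg.in/check.v1 ("gocheck") suite looks like next to the stdlib testing package; reverse() is a made-up function under test, not anything from go-ipfs:

```go
package example

import (
	"testing"

	. "gopkg.in/check.v1"
)

// Hook the gocheck suites into `go test`.
func Test(t *testing.T) { TestingT(t) }

type ReverseSuite struct{}

var _ = Suite(&ReverseSuite{})

func (s *ReverseSuite) TestReverse(c *C) {
	c.Assert(reverse("abc"), Equals, "cba")
	c.Check(reverse(""), Equals, "")
}

func reverse(in string) string {
	out := []rune(in)
	for i, j := 0, len(out)-1; i < j; i, j = i+1, j-1 {
		out[i], out[j] = out[j], out[i]
	}
	return string(out)
}
```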
arpu has joined #ipfs
<lidel> I've added a note about metalink in headers at: https://github.com/ipfs/notes/issues/104#issuecomment-287631968
<lidel> it only mentions rel=duplicate, but multiple Links with rel=describedby could be used at the same time
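A hedged Go sketch of that idea: a plain HTTP server advertising an IPFS mirror of the same file via Metalink-style Link headers (rel=duplicate per RFC 6249, plus a rel=describedby pointer). The hash and paths are made-up placeholders:

```go
package main

import (
	"fmt"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Advertise an alternative, content-addressed source for the same bytes.
	w.Header().Add("Link", `<https://ipfs.io/ipfs/QmPlaceholderHash>; rel=duplicate`)
	// Point at a separate metadata document (title, tags, webseeds, ...).
	w.Header().Add("Link", `</meta/podcast-114.json>; rel=describedby; type="application/json"`)
	w.Header().Set("Content-Type", "audio/mpeg")
	fmt.Fprint(w, "...file bytes would be served here...")
}

func main() {
	http.HandleFunc("/podcast-114.mp3", handler)
	http.ListenAndServe(":8080", nil)
}
```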
<hoenir> exit
hoenir has quit [Quit: leaving]
<melvster> very nice
<melvster> i can also put that rel="duplicate" into the body of the meta data ... tho there's an element of syntactic sugar there, I think it would work
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
chungy has quit [Ping timeout: 260 seconds]
jedahan has joined #ipfs
chungy has joined #ipfs
maxlath has joined #ipfs
arkimedes has joined #ipfs
subtraktion has joined #ipfs
Akaibu has joined #ipfs
<melvster> just found: https://ipld.io/
tpae has joined #ipfs
<tpae> hello, i have a quick question with js-ipfs
maxlath has quit [Ping timeout: 240 seconds]
shizy has joined #ipfs
<tpae> do i need to be running my own ipfs locally before using js-ipfs on my browser?
<dryajov> tpae: not really, you can run a full node in the browser
antiantonym has quit [Ping timeout: 258 seconds]
arpu has quit [Ping timeout: 260 seconds]
jedahan has quit [Quit: Textual IRC Client: www.textualapp.com]
<deltab> melvster: there's also http://multiformats.io/ and there'll be one for libp2p too
<melvster> very cool
arpu has joined #ipfs
shizy has quit [Quit: WeeChat 1.7]
Encrypt has joined #ipfs
espadrine has quit [Ping timeout: 264 seconds]
<melvster> jbenet, nice write up on Linked Data, I replied here : https://github.com/ipfs/ipfs/issues/36#issuecomment-287637589
Encrypt has quit [Quit: Quit]
<SchrodingersScat> why am I downloading 4.28MB for a repo add again?
<SchrodingersScat> per second
_fil_ has joined #ipfs
kulelu88 has joined #ipfs
rendar has quit [Ping timeout: 268 seconds]
antiantonym has joined #ipfs
spacebar_ has quit [Quit: spacebar_ pressed ESC]
rendar has joined #ipfs
wkennington has joined #ipfs
muvlon has quit [Ping timeout: 256 seconds]
espadrine has joined #ipfs
spacebar_ has joined #ipfs
fleeky has quit [Remote host closed the connection]
muvlon has joined #ipfs
onabreak has quit [Ping timeout: 260 seconds]
Gruu has quit [Ping timeout: 260 seconds]
Encrypt has joined #ipfs
spacebar_ has quit [Quit: spacebar_ pressed ESC]
[0__0] has quit [Ping timeout: 245 seconds]
wallacoloo____ has joined #ipfs
[0__0] has joined #ipfs
zabirauf has quit [Remote host closed the connection]
zabirauf has joined #ipfs
zabirauf has quit [Read error: Connection reset by peer]
zabirauf_ has joined #ipfs
zabirauf_ has quit [Remote host closed the connection]
zabirauf has joined #ipfs
<melvster> if i do ipfs add and dont select pin it will eventually get garbage collected, right?
<melvster> e.g. when i reboot?
<whyrusleeping> melvster: it will get garbage collected when you run `ipfs repo gc`
<melvster> whyrusleeping, and only then?
<whyrusleeping> melvster: but `ipfs add` pins by default
<whyrusleeping> yes
<melvster> oh
<melvster> so if i do ipfs add, then ipfs repo gc, it's still pinned?
<whyrusleeping> yeap
fleeky has joined #ipfs
<melvster> how do i add without pinning?
<whyrusleeping> if you do `ipfs add --pin=false something` it wont pin
<melvster> ahhh
<melvster> thanks
<whyrusleeping> SchrodingersScat: hrm... probably dht traffic
* whyrusleeping adds another tally to "reasons to fix content providing"
<whyrusleeping> SchrodingersScat: try running your daemon with --routing=dhtclient and see if it does the same
<melvster> and is it possible to 'unpin' something?
qballer has joined #ipfs
<chpio[m]> ipfs pin rm $hash?
<melvster> ty!
Jesin has quit [Quit: Leaving]
<whyrusleeping> If anyone likes thinking deeply about distributed systems, i'd love some more feedback here: https://github.com/ipfs/notes/issues/162
Jesin has joined #ipfs
<melvster> if i have a list of hashes on my machine, is it public to everyone, or only if you know the exact hash?
<whyrusleeping> melvster: have you added the list of hashes to ipfs?
hoenir has joined #ipfs
<melvster> lets say, yes
<whyrusleeping> then i could see a dht provider record for that object, fetch it from you, and have the list
<whyrusleeping> but i would have to be crawling the dht
<melvster> ok, thanks
<Mateon1> whyrusleeping: I saw that a while ago, IMO: Delegated routing - no, Trackers - ??? (need to think more, Trackers are not enough by themselves, but might be nice to lower strain on DHT), Hybrid: I like that idea! Might be tricky to implement, might need a reputation system to see which nodes are useful & accurate and which ones are not (or malicious).
<whyrusleeping> Yeah.... thats basically my thoughts on the matter at this point too
<whyrusleeping> i like the hybrid idea, but its going to be hard
<whyrusleeping> i also like the idea of sharding the dht
<whyrusleeping> but thats also hard
krzysiekj has quit [Quit: WeeChat 1.5]
<Mateon1> I don't think sharding a DHT is feasible
<whyrusleeping> it would be sharding the keyspace
<whyrusleeping> basically, changing the behaviour of the 'Get closest peers' functionality
mguentner2 is now known as mguentner
<Mateon1> So, does that mean having N separate DHTs, each containing only keys ==X(mod N)?
<Mateon1> I don't see how that helps
<whyrusleeping> It helps because it makes the number of rpcs required to put large amounts of records out much smaller
<whyrusleeping> i can batch provider calls together
<melvster> need to give everyone who helps the network crypto coins =)
<whyrusleeping> yeah, incentivized dhts are an idea we've been wanting for a while
<Mateon1> Right... Can't you batch records without sharding the DHT though?
<whyrusleeping> Mateon1: its much harder
<whyrusleeping> because each record goes to a very specific subset of peers
krzysiekj has joined #ipfs
<whyrusleeping> at scale, no two records go to the same subset
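A very rough sketch, not the go-ipfs design, of why sharding the keyspace helps with batching: records that hash into the same shard could be announced to the same subset of peers in one RPC, instead of one "closest peers" lookup per record:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

const numShards = 16

// shardOf maps a key onto one of numShards keyspace buckets.
func shardOf(key string) uint32 {
	h := sha256.Sum256([]byte(key))
	return binary.BigEndian.Uint32(h[:4]) % numShards
}

func main() {
	keys := []string{"QmAaa...", "QmBbb...", "QmCcc..."}
	batches := make(map[uint32][]string)
	for _, k := range keys {
		batches[shardOf(k)] = append(batches[shardOf(k)], k)
	}
	for shard, ks := range batches {
		// In this sketch, one "provide" RPC per shard would announce all of ks.
		fmt.Printf("shard %d: %v\n", shard, ks)
	}
}
```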
<lemmi> syncthing used relaying for a while and it contributed quite a lot to the network overall. also people don't seem to have a problem with running relay nodes. i even run one myself. it's more or less an intermediate solution, it doesn't do quite the same thing you would need, but maybe it's a tool to get some relief for some time.
maxlath has joined #ipfs
<Mateon1> lemmi: No.. that just helps with bypassing NATs
<whyrusleeping> yeah, relays just help with nats
<whyrusleeping> cjdns switched to supernodes
<whyrusleeping> (aka, delegated routing)
<Mateon1> whyrusleeping: Has it already? Is that a new protocol version?
<whyrusleeping> Mateon1: yeah, fairly certain that happened already
<whyrusleeping> lgierth: Kubuxu right?
* Mateon1 checks cjdns repo
<Kubuxu> whyrusleeping: not really
<Kubuxu> dht is still the default
<Mateon1> Ah, yep. v19
<Kubuxu> but supernode routing is being worked on
<Mateon1> I'm still on v18
<Kubuxu> and can work in parallel with DHT
<Mateon1> Actually, I'm not on any version since I reinstalled..
<Kubuxu> you delegate DHT queries to your supernode
<Mateon1> Are supernodes manual, or automatically discovered?
Encrypt has quit [Quit: Quit]
<Kubuxu> manual
<Kubuxu> as you have to partially trust your supernode
subtraktion has quit [Ping timeout: 256 seconds]
wallacoloo____ has left #ipfs ["Good Bye"]
<melvster> oops disk filled up ..
MrControll has joined #ipfs
<nicolagreco> hey IPFS, is there a way for me to connect and send files to a peerId that I know of?
<nicolagreco> even the other way around is fine: like pinning to an ipns and being always pulling
<nicolagreco> I can just make a little cron that keeps on polling, that is fine too
<hoenir> whyrusleeping: could you please tell me how I can add a dependency to go-ipfs?
<hoenir> using gx of course
<whyrusleeping> hoenir: gx init and publish the new dependency, then gx import it in go-ipfs
<hoenir> where do I run gx init? in github.com/ipfs/go-ipfs/?
<whyrusleeping> in your dependency
<whyrusleeping> its like npm, a package needs to be declared as a package (so it can specify its own deps)
<whyrusleeping> You can also just use normal dependencies (if you just want to test things out)
<hoenir> I want to be sure that if I give my branch to someone, they can run the normal stuff like make install and get the project + the dependency I've added.
<Mateon1> nicolagreco: Can't push files without something like ipfs-cluster
<Mateon1> You can pull with repeated `ipfs pin /ipns/Qmwhatever`
<Mateon1> pin add*
<hoenir> I think I resolved it, thanks whyrusleeping
infinity0 has quit [Remote host closed the connection]
<whyrusleeping> hoenir: Yeah, you can pin the dependency on http://mars.i.ipfs.team:9444/
<whyrusleeping> Or you can ask for its hash to be pinned here by pinbot
Guest57795 is now known as ronsor
ronsor has quit [Changing host]
ronsor has joined #ipfs
infinity0 has joined #ipfs
<hoenir> whyrusleeping: I don't own the dependency.
<hoenir> to be more specific, it's gopkg.in/check.v1
<Kubuxu> tell me what it is, I will fork it into the gxed org (the org where we keep gx-ified packages).
<Kubuxu> or keep your own fork
<Kubuxu> it doesn't matter much
<Kubuxu> just be warned, we have explicitly chosen not to use a testing lib
<hoenir> I know
<AphelionZ> what's up, y'all? What's the keygen lib I should be using?
<hoenir> When I saw the code I was disappointed
<AphelionZ> for javascript browser-based keygen
<Kubuxu> AphelionZ: js-libp2p-crypto iirc
<AphelionZ> Kubuxu: thanks!
<Kubuxu> hoenir: re test lib: https://github.com/ipfs/go-ipfs/issues/3498 if someone did a thorough analysis, compared pros and cons, and showed how existing example tests could be improved (especially the more complex ones), I think we could settle on some lib
cxl000 has quit [Quit: Leaving]
<Kubuxu> not sure though
<hoenir> Kubuxu: thanks for the link I will send my first PR soon
<Kubuxu> right off the bat, no guarantees that it will be merged
<hoenir> no worries
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<Kubuxu> substack: NICE
<Kubuxu> that soldering iron is horrific :P
<Kubuxu> as EE I am spoiled by soldering stations
wkennington has quit [Ping timeout: 256 seconds]
<Kubuxu> hoenir: 1. please split that PR into separate parts 2. freebsd is special: the argument is int64 instead of uint64
<Kubuxu> which breaks stuff
wkennington has joined #ipfs
tmg has joined #ipfs
<Kubuxu> that is why it got a separate build file
<hoenir> Kubuxu: a sec
<Kubuxu> s/parts/PRs
<Kubuxu> put tests in one, general cleanup in the other
<Kubuxu> you don't want to lock up part of it because another unrelated part moves slowly
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
zabirauf has quit [Remote host closed the connection]
<hoenir> rlim_t
<hoenir> Unsigned integer type used for limit values.
zabirauf has joined #ipfs
<hoenir> also, could you please point me to the bsd stuff?
<hoenir> I could not find anything related to what you said about bsd; from my point of view the syscalls are identical
zabirauf has quit [Ping timeout: 268 seconds]
zabirauf has joined #ipfs
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
<hoenir> yeah you are right I will fix this.
qballer has quit [Ping timeout: 260 seconds]
zabirauf has quit [Ping timeout: 240 seconds]
<hoenir> This is another reason why code should be documented...
<hoenir> My bad.
athan has joined #ipfs
engdesart has joined #ipfs
<melvster> does js-libp2p work?
<Kubuxu> hoenir: in this case, I agree it should have been pointed out in a comment, but checking git blame would also give you the reason for it.
<Kubuxu> also your PR comment just did what many code comments do
<Kubuxu> it got out of date
<hoenir> ?
<hoenir> it got out of date
<hoenir> ?
<hoenir> file, line number.
palkeo has quit [Quit: Konversation terminated!]
<Kubuxu> sorry
<Kubuxu> I thought you skipped the rlimit thingy completly
skeuomorf has joined #ipfs
spacebar_ has joined #ipfs
<Kubuxu> mind changing the Rlimit thingy a bit: extract the rlimit setting and getting functions (so only they are platform dependent) and keep the checking logic in one place?
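A rough sketch, not the actual go-ipfs code, of the split being suggested: the checking logic stays platform independent, and only the raw get/set would move into per-platform files, because FreeBSD's syscall.Rlimit fields are int64 while Linux and Darwin use uint64:

```go
// +build linux darwin

package fdlimit

import "syscall"

const target = 2048

// Raise bumps RLIMIT_NOFILE to target if the current soft limit is lower.
// This checking logic would be shared by every platform.
func Raise() (uint64, error) {
	cur, err := getLimit()
	if err != nil || cur >= target {
		return cur, err
	}
	if err := setLimit(target); err != nil {
		return cur, err
	}
	return target, nil
}

// Only these two helpers would move into a fdlimit_freebsd.go variant, where
// the struct fields need int64 conversions.
func getLimit() (uint64, error) {
	var rlim syscall.Rlimit
	err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rlim)
	return uint64(rlim.Cur), err
}

func setLimit(n uint64) error {
	rlim := syscall.Rlimit{Cur: n, Max: n}
	return syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rlim)
}
```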
<hoenir> Kubuxu: please, comment exactly in the PR
<Kubuxu> k
<hoenir> Feel free to add comments, review..
tpae has quit [Ping timeout: 260 seconds]
<Kubuxu> hoenir: do you have your daemon running?
<hoenir> a sec.
<hoenir> Initializing daemon...
<hoenir> Swarm listening on /ip4/79.112.254.61/tcp/4001
<hoenir> Swarm listening on /ip4/192.168.1.103/tcp/4001
<hoenir> Swarm listening on /ip4/192.168.1.101/tcp/4001
<hoenir> Swarm listening on /ip4/127.0.0.1/tcp/4001
<hoenir> Swarm listening on /ip6/::1/tcp/4001
tilgovi has joined #ipfs
<hoenir> API server listening on /ip4/127.0.0.1/tcp/5001
<hoenir> Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080
<hoenir> Daemon is ready
<Kubuxu> got it thanks
<hoenir> got what?
<hoenir> the dependency ?
<Kubuxu> !pin QmPp8w6dCzx6vPNre8QVSbN7GCY9M7ornWesnrJg48vYxE hoenir-"go-check
<pinbot> now pinning on 8 nodes
<pinbot> pinned on 8 of 8 nodes (0 failures) -- https://ipfs.io/ipfs/QmPp8w6dCzx6vPNre8QVSbN7GCY9M7ornWesnrJg48vYxE
<Kubuxu> !unpin QmPp8w6dCzx6vPNre8QVSbN7GCY9M7ornWesnrJg48vYxE
<pinbot> now unpinning on 8 nodes
<pinbot> unpinned on 8 of 8 nodes (0 failures) -- https://ipfs.io/ipfs/QmPp8w6dCzx6vPNre8QVSbN7GCY9M7ornWesnrJg48vYxE
<Kubuxu> !pin QmPp8w6dCzx6vPNre8QVSbN7GCY9M7ornWesnrJg48vYxE hoenir-go-check
<pinbot> now pinning on 8 nodes
<pinbot> pinned on 8 of 8 nodes (0 failures) -- https://ipfs.io/ipfs/QmPp8w6dCzx6vPNre8QVSbN7GCY9M7ornWesnrJg48vYxE
<Kubuxu> hoenir: yes
<hoenir> oh, so now I should run gx-go rewrite inside github.com/ipfs/go-ipfs/?
<Kubuxu> yes
<Kubuxu> it should change the gocheck imports to something like `gx/ipfs/Qm....`
<hoenir> $ gx-go rewrite
<hoenir> package "check.v1" not found. (dependency of go-ipfs)
<hoenir> ERROR: Error: build of rewrite mapping failed:
skeuomorf has quit [Ping timeout: 258 seconds]
<Kubuxu> gx install
<hoenir> should I download the file from ipfs.io/ipfs?
<hoenir> `gx install` done
<hoenir> now ?
<hoenir> retry the cmd ?
<Kubuxu> yup
tilgovi has quit [Ping timeout: 246 seconds]
kulelu88 has quit [Ping timeout: 240 seconds]
kulelu88 has joined #ipfs
<Kubuxu> good
<Kubuxu> those are two modes of gx vendoring
<Kubuxu> you can have code in your codebase in un-rewritten or rewritten state, un-rewritten works with go get but doesn't provide 100% pinning
<Kubuxu> rewritten doesn't work with go get but is 100% pinned
<hoenir> hmm
zabirauf has joined #ipfs
maxlath has quit [Quit: maxlath]
<Kubuxu> the sharness test code expects this message to have a dot at the end :p
<Kubuxu> you can either change the test or the print
<hoenir> so let me say what I've understood so far
<hoenir> you have the test/sharness pkg to run integration tests as little scripts to verify that the whole ecosystem is still working?
<Kubuxu> yes, it is the system that git uses for its integration testing
zabirauf has quit [Ping timeout: 264 seconds]
<hoenir> didn't know about that
<hoenir> so how can I resolve it ?
<Kubuxu> you can either change the test or the print in go
<dryajov> Git is introducing other hash algos support - https://github.com/git/git/commit/e1fae930193b3e8ff02cee936605625f63e1d1e4
<dryajov> Good time to suggest multihash?
<hoenir> grep "raised file descriptor limit to 1024." actual_daemon > /dev/null
<Kubuxu> dryajov: a bit late, also they have quite a complicated compatibility scheme
<Kubuxu> hoenir: yeah, your print doesn't print the `.` anymore
<hoenir> omg
<Kubuxu> yeah, I suggest removing the dot from the test
<Kubuxu> it is silly
<dryajov> Kubuxu: true on complexity... Wonder if they are at least aware of multihash tho
<Kubuxu> they are
anewuser has quit [Quit: anewuser]
<hoenir> so why does it keep failing now?
<hoenir> I'm lost
arkimedes has quit [Ping timeout: 246 seconds]
<Kubuxu> hoenir: I am off for tonight, will check it tomorrow and reply in the issue.
<hoenir> ok
hoenir has quit [Quit: leaving]