<hoenir>
why use this path instead of the classic idiomatic go import path like {vcs}/{pkg}/{sub-package}
<hoenir>
?
<hoenir>
ok
<hoenir>
whyrusleeping: I will take a look now :)
<hoenir>
thanks for the fast link
<whyrusleeping>
hoenir: because then upstream dependencies change and break ipfs
<whyrusleeping>
gx ensures that every time you download and build ipfs you're getting exactly what you expect
<whyrusleeping>
no random github repos breaking things
<whyrusleeping>
no accidentally pushing a breaking change as a patch update in semver
<whyrusleeping>
*exactly* the code we tested and published
<hoenir>
ahh, so basically you are locked to a version of the dependencies in order to not break ipfs. Which is a good reason.
<whyrusleeping>
yeap!
<hoenir>
never heard of the gx pkg manager
<whyrusleeping>
i wrote it after getting really really frustrated with all the existing go packaging tools
<hoenir>
yeah I know the go ecosystem lacks a good pkg manager or a standard one
<hoenir>
Now this makes sense with the weird hashy import paths.
<whyrusleeping>
yeap, if you want to go back to the normal paths, you can just run `gx-go rewrite --undo`
<whyrusleeping>
or to selectively change packages back: `gx-go rewrite --undo package-name`
antiantonym has joined #ipfs
arpu has joined #ipfs
<whyrusleeping>
gx also allows you to easily see the entire dependency structure with `gx deps --tree` (this is probably my favorite command)
<hoenir>
That's nice.
<hoenir>
This is really nice though
<whyrusleeping>
:D
<whyrusleeping>
I'm really glad to hear that :)
tmg has joined #ipfs
<Mateon1>
By the way, what is the status of depviz? Last time I looked, it seemed development had halted.
<Mateon1>
Quite a shame, it would be really useful
aedigix has quit [Remote host closed the connection]
<whyrusleeping>
Mateon1: agreed...
<whyrusleeping>
We're talking about trying to get it moving forward again
<whyrusleeping>
but we're already stretched pretty thin right now
aedigix has joined #ipfs
<whyrusleeping>
lgierth: you around?
<lgierth>
yeah
<whyrusleeping>
wanna try out something cool?
<whyrusleeping>
(hopefully cool)
<lgierth>
sure
<whyrusleeping>
'go get -u github.com/whyrusleeping/gx-go'
<whyrusleeping>
'cd $YOUR_FAVORITE_PROJECT'
<whyrusleeping>
'gx-go devcopy'
<whyrusleeping>
it doesnt quite work on go-ipfs yet
<whyrusleeping>
because we have a duplicate dependency
<hoenir>
Why doesn't this project use some error library for error return values? Like, for example, https://github.com/pkg/errors
<hoenir>
And why are the error messages so messy and random?
<hoenir>
and why are there entire blocks of code without a single comment?
<lgierth>
:)
<lgierth>
working on it
<lgierth>
whyrusleeping: :D ../../../workspace/gopath/src/github.com/libp2p/go-libp2p/p2p/protocol/identify/id.go:98: cannot use ids.Reporter (type "github.com/libp2p/go-libp2p/vendor/github.com/libp2p/go-libp2p-metrics".Reporter) as type "github.com/libp2p/go-libp2p/vendor/gx/ipfs/QmaMSrAXMpMhsrbGZYmGXE4X1ttkFv7KZSpGa5AKYTUpPD/go-libp2p-metrics".Reporter in argument to meterstream.WrapStream:
<lgierth>
it looks pretty useful though
<Mateon1>
hoenir: Regarding commenting code, I think it should be kept to a minimum, answering "why do this?" not "what does this do?". Functions should be documented, but I'm not sure if GoDoc works with comments or something else, as I haven't worked enough with Golang.
<lemmi>
i like the errors package a lot. *especially* when external libraries do something stupid. gives you a good pointer where to start looking
<hoenir>
I'm asking because I'm interested to contribute to the project in my spare time.
<Kubuxu>
re: comments, what comments usually do best is getting out of date as the code moves forward
<hoenir>
I can start adding the errors pkg to the go-ipfs implementation
<hoenir>
and if the code is updated, the comments will also be updated, so what's the problem?
Guest52219 has quit [Quit: Leaving]
<SchrodingersScat>
ERROR commands/h: open /home/anon/.ipfs/blocks/K7/put-367889218: too many open files client.go:247
<Kubuxu>
SchrodingersScat: see one of recent issues, it is known
<SchrodingersScat>
:(
<Kubuxu>
hoenir: thing is, they won't. There are multiple examples of that in programming in general, even in our codebases where we use comments sparingly
<hoenir>
I think the logic of "well, I don't write comments because I will change the code in the near future anyway, so why bother?" doesn't justify anything
<Kubuxu>
SchrodingersScat: you can try working around it by increasing the limit with `ulimit -n 2048` or something
<whyrusleeping>
incorrect comments are far worse than no comments
<Kubuxu>
lemmi: we are mostly discussing code comments, not doc comments. At least I was.
<whyrusleeping>
readable code is paramount
<Kubuxu>
Doc comments are something we are lacking a bit, but we're working on it.
<SchrodingersScat>
Kubuxu: It seems to make slow progress each iteration, so wrapped it with an 'until' loop.
<Kubuxu>
yeah, it will
<whyrusleeping>
Yeah, adding doc strings to functions and packages is something we need to do
<lemmi>
Kubuxu: ah
<lemmi>
probably missed a word somewhere
<Kubuxu>
SchrodingersScat: but increasing ulimit to 2048 should resolve it
<hoenir>
whyrusleeping: that's why you review PRs? and also why you update the comments?
espadrine has quit [Ping timeout: 260 seconds]
<SchrodingersScat>
Kubuxu: do you really want to make me cry ;_; ?
<Kubuxu>
yes
<Kubuxu>
:D
<hoenir>
I can start refactoring the code little by little if you want...
<hoenir>
and add the pkg errors in the error handling system
<Kubuxu>
re err lib: it brings a non-insignificant error-propagation cost by capturing stack dumps
<lemmi>
my approach to documentation: write comments on public functions for godoc; for the rest i usually try to use variable names and function names to document most things. also i usually drop a line or two in front of for loops or something
<Kubuxu>
and it also prevents direct comparison of errors, which is done very frequently
<whyrusleeping>
hoenir: If your code is sufficiently long that you need comments in it, that means one of a few things:
<whyrusleeping>
1. The code needs to be broken up into smaller, well scoped functions
<whyrusleeping>
2. The code itself is poorly written and unclear to the reader (non-obvious)
<whyrusleeping>
3. The code is too complicated and needs to be simplified
<lemmi>
that ^
<Kubuxu>
4. You are doing black magic and you shouldn't do it, but you have to because it is SO_REUSEPORT
<lemmi>
(still likes to see documentation for the public stuff)
<hoenir>
Not always... sometimes it's very helpful to have some comments alongside the code, especially when you are a newcomer to the project
<lgierth>
^
<whyrusleeping>
hoenir: take a look at the link lemmi posted about documenting go code
<Kubuxu>
yes, comments can bring a newcomer up to speed faster, but under-maintained comments (misleading, outdated, no longer fully valid) offset that added value negatively
<hoenir>
So should I add the errors pkg in the mix ?
<hoenir>
So what are your thoughts on this?
<Kubuxu>
It isn't a decision that can be taken on a whim. We have over 5000 err variables in go-ipfs, and over 500 places where new errors are created
<Kubuxu>
and it is just go-ipfs
<whyrusleeping>
also pulling a full stack trace from the runtime for every error will be expensive
<whyrusleeping>
we did this at one point in the past and removed them all since
<hoenir>
Or, if you don't want the errors package, the errors should at least be more organized and follow a well-defined error message format
<whyrusleeping>
that i can agree with
<whyrusleeping>
all our calls to fmt.Errorf or errors.New should pass a golint
<whyrusleeping>
that looks like what i had in mind
<whyrusleeping>
does it still print the double error message thing?
<lemmi>
if one wrapped the errors package, it could maybe be enabled with a command-line option
<whyrusleeping>
i'm setting up a broken repo so i can verify
<kevina>
whyrusleeping: yes
<whyrusleeping>
:/
<lemmi>
otherwise just use the normal errors without stacktrace
* whyrusleeping
investigates
<kevina>
whyrusleeping: you can just look at the output from the test cases
<whyrusleeping>
oh
<whyrusleeping>
right
<kevina>
"./t0087-repo-robust-gc.sh -v"
* whyrusleeping
rm -rf /tmp/ipfs-test
<Kubuxu>
lemmi: yeah, but is it then really worth it? It will be disabled 99% of the time, and it makes the codebase harder for new users (aka everyone is using errors.New and fmt.Errorf; why are you doing it some weird way?)
<lemmi>
Kubuxu: i had good experiences with that particular package, but i'm not that resource limited. also i think it should be easy enough to wrap the selection into its own lib, so it more or less looks the same but can be switched at any time
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
kvda has joined #ipfs
<lemmi>
maybe dave cheney could even be reasoned with and have a switch built in :)
<whyrusleeping>
he's generally pretty reasonable
matoro has joined #ipfs
<whyrusleeping>
hoenir: if you want, you can pick a given package and make it golint compliant
<whyrusleeping>
That's generally 100% non-controversial ;)
<lemmi>
probably not a bad idea to have a look at the tree while also doing something useful :)
<zabirauf_>
Hi, does js-ipfs work in the browser without connecting to an ipfs node running locally on the machine?
<interfect[m]>
whyrusleeping: I comment everything I ever write. I always have to write out what I'm doing in comments first, so I know it's actually a well-formed idea
<interfect[m]>
The other reason you need comments is that code can be very explicit on the what but provides absolutely no indication as to the why
<interfect[m]>
So you need to say things like "move this pointer so we preserve our invariant that blah"
<interfect[m]>
Or "We're not allowed to do this the first way that occurs to us because that would violate this other precondition"
<interfect[m]>
It's easy to look at code with no comments and think that you understand it and that it's correct, but then if you go through and write out justification for what's happening, you will find that a crucial step is missing, or that some code is superfluous and not required to accomplish what you're doing, or that the whole approach is wrongheaded and can never work.
<interfect[m]>
Whereas if you have just code all you're thinking about is the how and the what
<horrified>
what's wrong with archives.ipfs.io? barely any links manage to load
<melvster>
can someone explain how the # character is used in ipfs, e.g. to pull in an app?
skeuomorf has quit [Ping timeout: 240 seconds]
Monokles has joined #ipfs
<deltab>
can you give an example?
<melvster>
deltab, I was watching the ipfs video and it showed a movie being played but it also pulled in the movie player, by using the # character and downloading the player itself from ipfs ...
<melvster>
but looks like that might not be yet aligned with multihash
<daurnimator>
That's a link to "Hash Algorithms"
<daurnimator>
"Encryption Algorithms" further down is ciphers; but it's not a very large list
<melvster>
oh you meant ciphers for encryption?
gpestana has quit [Quit: Connection closed for inactivity]
<melvster>
yeah i know, IANA isn't that well updated, but it's probably the top level for such registries of that kind ... i'm kind of new here, so was just passing on what I know ... multihash probably has its own list, but it would be nice to get everything in sync :)
<daurnimator>
I would at least expect to see AES-{128,192,256}-{CBC,OCB,GCM} and XSalsa20 on such a list
<melvster>
true
<melvster>
it's just a case of people volunteering to send in updates
<daurnimator>
updates to what?
<melvster>
the list
<melvster>
anyway i probably didn't answer your question, apologies for the distraction ...
<daurnimator>
the EAP-POTP list?
<daurnimator>
I'm not sure my requirements are the same as theirs...
<daurnimator>
and it wouldn't be the right place either: e.g. due to how ipsec works, a CFB cipher wouldn't make any sense: you'd mainly want CBC
dimitarvp has joined #ipfs
maxlath has joined #ipfs
<SchrodingersScat>
Kubuxu: 07:48:11.750 ERROR commands/h: open /home/anon/.ipfs/blocks/Y6/put-579985381: too many open files client.go:247 https://www.youtube.com/watch?v=2nXGPZaTKik The hilarious part was this was part of an old script before IPFS was crippled and so it deletes the files after attempting to insert. I wasn't testing for failure, oops.
<Kubuxu>
:(
<Kubuxu>
It is funny, as it broke because we fixed the DHT
<Kubuxu>
the other way to work around it is to run `ipfs daemon --offline` for the add
<SchrodingersScat>
it is funny, my ":|" face as I saw my script proudly announce that it failed not even 1/3 of the way in, and then removed the file
<SchrodingersScat>
daemon can be offline for an add?
<SchrodingersScat>
oh, i see, that's offline mode, so no dht to clog up my open files?
<Kubuxu>
yeah
<Kubuxu>
or `ipfs daemon --routing=none`
<SchrodingersScat>
I still prefer in bash: until ipfs add -w "$file" ; do sleep .5 ; done
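SchrodingersScat's until-loop can be made a bit more defensive with a bounded retry helper (`retry` is an invented name; the ipfs invocation is shown commented out since it needs a running daemon):

```shell
# retry: run a command until it succeeds, up to $1 attempts,
# sleeping half a second between tries.
retry() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 0.5
  done
  return 1
}

# the invocation from the chat, bounded to 20 tries:
# retry 20 ipfs add -w "$file"
retry 3 true && echo "retry works"
```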
<melvster>
I got the 'too many open files' error, but it didn't delete the file (phew!) :)
<SchrodingersScat>
it'll get there eventually, maybe not today, maybe not next week, but eventually
<SchrodingersScat>
melvster: yeah, I'm dumb, it tests for completion now.
jkilpatr has joined #ipfs
<melvster>
So I think I have made a linkage between the IPFS and HTTP metadata:
<lidel>
melvster, I hinted at using .well-known some time ago, but it is “not intended for general information retrieval or establishment of large URI namespaces” https://tools.ietf.org/html/rfc5785#page-3
<lidel>
melvster, there is already established standard for providing multiple alternative sources for the same resource called "Metalink" https://en.wikipedia.org/wiki/Metalink
<melvster>
sameAs I think would work equally well here, and can also go into link headers, but modifying link headers can be a challenge ...
jedahan has joined #ipfs
<melvster>
I think at this point it would be easier to look in a well known location than to modify link headers. Though longer term link headers are probably the way to go ...
<Guest57795>
WebTorrent
<melvster>
what I think I'll do is rel="describedBy" and then point to a .well-known meta file for that hash
<hoenir>
can anyone explain why we don't use a well-known testing API? like gocheck?
<lidel>
it only mentions rel=duplicate, but multiple Links with rel=describedby could be used at the same time
<hoenir>
exit
hoenir has quit [Quit: leaving]
<melvster>
very nice
<melvster>
i can also put that rel="duplicate" into the body of the metadata ... though there's an element of syntactic sugar there, I think it would work
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<melvster>
if i have a list of hashes on my machine, is it public to everyone, or only to someone who knows the exact hash?
<whyrusleeping>
melvster: have you added the list of hashes to ipfs?
hoenir has joined #ipfs
<melvster>
lets say, yes
<whyrusleeping>
then i could see a dht provider record for that object, fetch it from you, and have the list
<whyrusleeping>
but i would have to be crawling the dht
<melvster>
ok, thanks
<Mateon1>
whyrusleeping: I saw that a while ago. IMO: Delegated routing - no. Trackers - ??? (need to think more; Trackers are not enough by themselves, but might be nice to lower strain on the DHT). Hybrid: I like that idea! Might be tricky to implement, might need a reputation system to see which nodes are useful & accurate and which ones are not (or malicious).
<whyrusleeping>
Yeah.... thats basically my thoughts on the matter at this point too
<whyrusleeping>
i like the hybrid idea, but its going to be hard
<whyrusleeping>
i also like the idea of sharding the dht
<whyrusleeping>
but thats also hard
krzysiekj has quit [Quit: WeeChat 1.5]
<Mateon1>
I don't think sharding a DHT is feasible
<whyrusleeping>
it would be sharding the keyspace
<whyrusleeping>
basically, changing the behaviour of the 'Get closest peers' functionality
mguentner2 is now known as mguentner
<Mateon1>
So, does that mean having N separate DHTs, each containing only keys ==X(mod N)?
<Mateon1>
I don't see how that helps
<whyrusleeping>
It helps because it makes the number of rpcs required to put large amounts of records out much smaller
<whyrusleeping>
i can batch provider calls together
<melvster>
need to give everyone who helps the network crypto coins =)
<whyrusleeping>
yeah, incentivized dhts are an idea we've been wanting for a while
<Mateon1>
Right... Can't you batch records without sharding the DHT though?
<whyrusleeping>
Mateon1: its much harder
<whyrusleeping>
because each record goes to a very specific subset of peers
krzysiekj has joined #ipfs
<whyrusleeping>
at scale, no two records go to the same subset
<lemmi>
syncthing used relaying for a while and it contributed quite a lot to the network overall. also, people don't seem to have a problem with running relay nodes; i even run one myself. it's more or less an intermediate solution, it doesn't do quite the same thing you would need, but maybe it's a tool to get some relief for some time.
maxlath has joined #ipfs
<Mateon1>
lemmi: No.. that just helps with bypassing NATs
<whyrusleeping>
yeah, relays just help with nats
<whyrusleeping>
cjdns switched to supernodes
<whyrusleeping>
(aka, delegated routing)
<Mateon1>
whyrusleeping: Has it already? Is that a new protocol version?
<whyrusleeping>
Mateon1: yeah, fairly certain that happened already
<whyrusleeping>
lgierth: Kubuxu right?
* Mateon1
checks cjdns repo
<Kubuxu>
whyrusleeping: not really
<Kubuxu>
dht is still the default
<Mateon1>
Ah, yep. v19
<Kubuxu>
but supernode routing is being worked on
<Mateon1>
I'm still on v18
<Kubuxu>
and can work in parallel with DHT
<Mateon1>
Actually, I'm not on any version since I reinstalled..
<Kubuxu>
you delegate DHT queries to your supernode
<Mateon1>
Are supernodes manual, or automatically discovered?
Encrypt has quit [Quit: Quit]
<Kubuxu>
manual
<Kubuxu>
as you have to partially trust your supernode
subtraktion has quit [Ping timeout: 256 seconds]
wallacoloo____ has left #ipfs ["Good Bye"]
<melvster>
oops disk filled up ..
MrControll has joined #ipfs
<nicolagreco>
hey IPFS, is there a way for me to connect and send files to a peerId that I know of?
<nicolagreco>
even the other way around is fine: like pinning to an ipns and always pulling
<nicolagreco>
I can just make a little cron that keeps on polling, that is fine too
<hoenir>
whyrusleeping: could you please tell me how I can add a dependency to go-ipfs?
<hoenir>
using gx of course
<whyrusleeping>
hoenir: gx init and publish the new dependency, then gx import it in go-ipfs
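whyrusleeping's three steps would look roughly like this as commands (a sketch only: the dependency path matches hoenir's later gopkg.in/check.v1 example, and the hash placeholder stands for whatever `gx publish` prints):

```shell
# inside the dependency you want to gxify
cd "$GOPATH/src/gopkg.in/check.v1"
gx init          # declare it as a gx package (creates package.json)
gx publish       # add the package to ipfs; prints its hash

# then, inside go-ipfs, import it by that hash
cd "$GOPATH/src/github.com/ipfs/go-ipfs"
gx import <hash-printed-by-gx-publish>
gx-go rewrite    # rewrite import paths to the gx paths
```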
<hoenir>
where do I hit the gx init ? in github.com/ipfs/go-ipfs/ ?
<whyrusleeping>
in your dependency
<whyrusleeping>
it's like npm, a package needs to be declared as a package (so it can specify its own deps)
<whyrusleeping>
You can also just use normal dependencies (if you just want to test things out)
<hoenir>
I want to be sure that if I give my branch to someone, he/she can run the normal stuff like make install and install the project plus the dependency that I've added.
<Mateon1>
nicolagreco: Can't push files without something like ipfs-cluster
<Mateon1>
You can pull with repeated `ipfs pin /ipns/Qmwhatever`
<Mateon1>
pin add*
<hoenir>
I think I resolved it, thanks whyrusleeping
infinity0 has quit [Remote host closed the connection]
<whyrusleeping>
Or you can ask for its hash to be pinned here by pinbot
<hoenir>
whyrusleeping: I don't own the dependency.
<hoenir>
to be more specific this is gopkg.in/check.v1
<Kubuxu>
tell me what it is, I will fork it into the gxed org (the org where we keep gxified packages).
<Kubuxu>
or keep your own fork
<Kubuxu>
it doesn't matter much
<Kubuxu>
just be warned, we have explicitly not used a testing lib
<hoenir>
I know
<AphelionZ>
what's up, y'all? What's the keygen lib I should be using?
<hoenir>
When I saw the code I was disappointed
<AphelionZ>
for javascript browser-based keygen
<Kubuxu>
AphelionZ: js-libp2p-crypto iirc
<AphelionZ>
Kubuxu: thanks!
<Kubuxu>
hoenir: re test lib: https://github.com/ipfs/go-ipfs/issues/3498 - if someone did a thorough analysis, compared pros and cons, and showed how existing example tests could be improved (especially the more complex ones), I think we could settle on some lib
cxl000 has quit [Quit: Leaving]
<Kubuxu>
not sure though
<hoenir>
Kubuxu: thanks for the link I will send my first PR soon
<Kubuxu>
right off the bat, no guarantees that it will be merged
<Kubuxu>
hoenir: in this case, agreed, it should have been pointed out in a comment, but checking git blame would also give you the reason for it.
<Kubuxu>
also your PR comment just did what many code comments do
<Kubuxu>
it got out of date
<hoenir>
?
<hoenir>
it got out of date
<hoenir>
?
<hoenir>
file, line number.
palkeo has quit [Quit: Konversation terminated!]
<Kubuxu>
sorry
<Kubuxu>
I thought you skipped the rlimit thingy completely
skeuomorf has joined #ipfs
spacebar_ has joined #ipfs
<Kubuxu>
mind changing the Rlimit thingy a bit: extract the rlimit setting and getting functions (so only they are platform dependent) and keep the checking logic in one place?
<hoenir>
Kubuxu: please, comment exactly in the PR
<Kubuxu>
k
<hoenir>
Feel free to add comments, review..
tpae has quit [Ping timeout: 260 seconds]
<Kubuxu>
hoenir: do you have your daemon running?
<hoenir>
a sec.
<hoenir>
Initializing daemon...
<hoenir>
Swarm listening on /ip4/79.112.254.61/tcp/4001
<hoenir>
Swarm listening on /ip4/192.168.1.103/tcp/4001
<hoenir>
Swarm listening on /ip4/192.168.1.101/tcp/4001
<hoenir>
Swarm listening on /ip4/127.0.0.1/tcp/4001
<hoenir>
Swarm listening on /ip6/::1/tcp/4001
<hoenir>
API server listening on /ip4/127.0.0.1/tcp/5001
<hoenir>
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080
<hoenir>
Daemon is ready
tilgovi has joined #ipfs