lgierth changed the topic of #ipfs to: go-ipfs v0.4.8 is out! https://dist.ipfs.io/#go-ipfs | Week 13: Web browsers, IPFS Cluster, Orbit -- https://waffle.io/ipfs/roadmaps | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | FAQ: https://git.io/voEh8 | Logs: https://botbot.me/freenode/ipfs/ | Code of Conduct: https://git.io/vVBS0
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
appa has quit [Ping timeout: 240 seconds]
yoyogo has joined #ipfs
TryTooHard has quit [Ping timeout: 240 seconds]
<drewolson> should `ipfs name publish <some hash>` take a while?
ljhms has quit [Ping timeout: 240 seconds]
<drewolson> just finished, took probably 30 seconds or so
<whyrusleeping> drewolson: yeah, if you arent well connected in the network yet, it can take a little bit
<whyrusleeping> theres definitely some optimization to do on that end too
<drewolson> whyrusleeping it also seems to be taking quite a while to view the newly published hash via the gateway
<drewolson> it's spinning as we speak
<drewolson> ah, just resolved
<drewolson> whyrusleeping: while i've got you, how much bandwidth can i expect an ipfs node to consume? i'm trying to decide if i should run it on my digital ocean droplet
ljhms has joined #ipfs
<drewolson> interestingly, even `ipfs ls <hash>` takes a while from my local machine
<whyrusleeping> drewolson: hard to say exact numbers, it varies day to day. But its a bit higher than i want it to be. Most of our infra nodes run on DO, and don't get any bandwidth issues
<whyrusleeping> drewolson: yeah, ipns right now is set to be as secure as it can be
<whyrusleeping> it doesnt make any assumptions
<whyrusleeping> so it checks the network to see if a more valid record than your own exists for your key
<whyrusleeping> (which seems weird, but you can share keys and others can publish names)
<drewolson> whyrusleeping: ah, thanks. makes sense.
<whyrusleeping> figuring out a UX for the speed/security tradeoff is hard
<whyrusleeping> what should the command to "resolve the latest entry and make sure its the latest possible entry" be
<drewolson> yeah, fair point
<whyrusleeping> and what should the command to "just give me a value i don't care" be?
<drewolson> might be nice to have some hard cap at least (the ls is still hanging)
<whyrusleeping> i think theres a 30 second hard cap
<whyrusleeping> should be at least
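(For reference, the publish/resolve round-trip being discussed; <content-hash> is a placeholder and the --nocache flag is assumed to exist in this go-ipfs version:)

    ipfs name publish <content-hash>    # sign an IPNS record for your node key and push it to the DHT
    ipfs name resolve                   # resolve your own name; queries the DHT for the freshest record
    ipfs name resolve --nocache         # skip the local cache entirely: slowest, but maximally fresh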
<drewolson> it seems i can also generate more than one key on a node to publish multiple shas, yes?
<drewolson> that's pretty cool
<whyrusleeping> yeap!
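(A sketch of that multi-key flow; the key name is made up, and `ipfs key gen` plus the --key option are assumed to be available in this still-experimental form:)

    ipfs key gen --type=rsa --size=2048 mysite      # create a second key alongside the default "self"
    ipfs name publish --key=mysite <content-hash>   # publish a separate IPNS name under that key
    ipfs key list -l                                # list key names with their peer IDs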
<drewolson> well, regardless of current performance, the ideas here are very exciting
<drewolson> ipfs seems far more fleshed out than ssb
<drewolson> and the idea of aggressive local gc seems very good for local cache size
<whyrusleeping> :)
<whyrusleeping> Theres definitely still a lot to do
<whyrusleeping> But its a lot of fun, and i always enjoy hearing from new users trying things out
<drewolson> the concepts make a lot of sense. ipns is particularly interesting to me. it all feels like persistent data structures for the web :)
<drewolson> with ipns being a local variable binding
<whyrusleeping> yep! its exactly that
<whyrusleeping> its a pointer
<drewolson> are there ideas in the pipeline for making ipns more performant? or is it mostly about allowing the node to specify the "trade offs" associated with the request
<drewolson> for example, i published an ipfs link on my homepage to my homepage in ipfs https://drewolson.org/
<drewolson> i don't care that it's super up to date, but i don't want to have to change the hash if i push new content, so ipns felt like a good idea
<drewolson> but the ipns loading seems prohibitively long right now. so perhaps i should go back to a single ipfs hash
<drewolson> and just update things as i update my site
<drewolson> except that i'd have to know the hash to refer to itself in the site _before_ it is pushed :/
<whyrusleeping> hah
<whyrusleeping> yeah... it does seem to be taking a longer time than usual
<whyrusleeping> weird
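(The workflow drewolson is weighing, as a rough sketch; ./site is a placeholder directory and the -Q/--quieter flag is an assumption for this version:)

    HASH=$(ipfs add -r -Q ./site)   # add the site, keep only the root hash
    ipfs name publish "$HASH"       # repoint /ipns/<your-peer-id> at the new root
    # visitors use the stable /ipns/<your-peer-id> path, so the site never has to embed its own /ipfs/ hash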
JayCarpenter has quit [Quit: Page closed]
<drewolson> whyrusleeping is 0.4.8 the latest version of the go client?
<drewolson> i'm definitely seeing more than 30 seconds on resolves. will gist in a few.
<drewolson> (if it ever finishes)
<drewolson> ah, so this is interesting
<drewolson> if i prefix the command with /ipns/
<drewolson> it resolves much faster
<drewolson> without the prefix, it seems to hang
_rht has joined #ipfs
<whyrusleeping> drewolson: way
<whyrusleeping> thats really weird
<whyrusleeping> and definitely worth reporting as a bug
skeuomorf has joined #ipfs
<lgierth> kiboneu achin: i migrated the certs to gandi and all is good now
<kevina> whyrusleeping: could use your feedback on several issues...
Akaibu has joined #ipfs
<kevina> https://github.com/ipfs/go-ipfs/pull/3575 although this one is looking a bit involved and I am not sure if I am the best one to handle it...
talonzx[m] has joined #ipfs
<kiboneu> yay! thanks lgierth
<lgierth> sorry for the inconvenience, i hope you have a pleasant journey
<achin> lgierth: \o/
<drewolson> can someone validate an assumption for me -- once i add a file to ipfs, i can stop running my local node and this file should still be accessible to any node in the network. yes?
<deltab> drewolson: only if someone else requested it and received it, making it available from their own node
<drewolson> deltab: ah. i see. so there can still be single points of failure
<deltab> you add a file to your own node's store, not anyone else's (unless they've agreed to mirror it)
<deltab> yes, for content that's not popular enough to be replicated
<drewolson> so i have to keep my node running all the time in order to keep that file available on the network
<drewolson> deltab when you say "replicated", you mean pinned?
ZarkBit has quit [Remote host closed the connection]
<deltab> just requesting something is enough temporarily; but without pinning the content will at some point be flushed from the cache
<drewolson> understood.
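(What "pinned" means in practice, as a sketch; <hash> is a placeholder:)

    ipfs pin add <hash>             # fetch if needed and pin recursively, so 'ipfs repo gc' won't evict it
    ipfs pin ls --type=recursive    # what this node has promised to keep
    ipfs repo gc                    # unpinned cached blocks are what this flushes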
<drewolson> if i have the hash of an object and i want to get the hash of the directory it is stored in, how would i do that via the command line?
<deltab> which directory? there could be many
<drewolson> deltab i did an `ipfs add -r dir`
<deltab> there's currently no mechanism for doing that, that I know of
<drewolson> and right now, i have the hash for one of the files in this dir
<deltab> you'd have to search dirs for it
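(No reverse lookup exists, but if the parent directory's hash from `ipfs add -r` is still at hand, a brute-force check looks roughly like this; both hashes are placeholders:)

    ipfs ls <dir-hash>                           # immediate children, with names and hashes
    ipfs refs -r <dir-hash> | grep <file-hash>   # is the file anywhere under this directory?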
<whyrusleeping> kevina: ACK
<drewolson> another question -- by default, what key is used when you `ipfs name publish <thing>`
<drewolson> is it a key that is generated during init?
<drewolson> or something else
<whyrusleeping> drewolson: your nodes key, "self"
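(To see that default key; the -l flag is assumed to print the peer ID next to each key name:)

    ipfs key list -l   # "self" is the key created by 'ipfs init'; its ID is your /ipns/ name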
ryantm has joined #ipfs
shizy has joined #ipfs
<whyrusleeping> kevina: responded
ygrek has quit [Ping timeout: 240 seconds]
char has joined #ipfs
char has left #ipfs [#ipfs]
<kevina> thanks
robattila256 has joined #ipfs
Soulweaver has quit [Read error: Connection reset by peer]
Soulweaver has joined #ipfs
asyncsec has joined #ipfs
shizy has quit [Ping timeout: 240 seconds]
spilotro has quit [Ping timeout: 268 seconds]
asyncsec has quit [Ping timeout: 268 seconds]
spilotro has joined #ipfs
museless has joined #ipfs
dimitarvp has quit [Read error: Connection reset by peer]
jaboja has quit [Remote host closed the connection]
Nycatelos has quit [Ping timeout: 264 seconds]
<dsal> Feature I wish I had now: pin --for=time.Duration
pescobar has quit [Remote host closed the connection]
SuprDewd has quit [Quit: No Ping reply in 180 seconds.]
SuprDewd has joined #ipfs
<jbenet> dsal: yeah pls. maybe pin --duration=time.Duration
Nycatelos has joined #ipfs
pescobar has joined #ipfs
<alu> wew
infinity0 has joined #ipfs
palkeo has joined #ipfs
palkeo has joined #ipfs
palkeo has quit [Changing host]
Guest193196[m] has quit [Ping timeout: 245 seconds]
Guest193196[m] has joined #ipfs
TryTooHard has joined #ipfs
<jamesstanley> I have written this blog post: http://incoherency.co.uk/blog/stories/bitaddress-on-ipfs.html
<jamesstanley> partly because it is interesting, and partly to evangelise ipfs to the bitcoin community
<jamesstanley> feedback appreciated
<whyrusleeping> !pin /ipfs/QmYz4bkhtdkGRLFtago2hoNucrfUy5bHwrun8N8XJrrKWA/ bitaddress.org snapshot
<pinbot> now pinning on 8 nodes
<pinbot> pinned on 8 of 8 nodes (0 failures) -- https://ipfs.io/ipfs/QmYz4bkhtdkGRLFtago2hoNucrfUy5bHwrun8N8XJrrKWA/
<jamesstanley> :D
<TheGillies> does that bot need admin access?
<whyrusleeping> !pin /ipfs/QmSaJUuCesYDpxLbHrrKA3m2HjN4YcufhAtDH2achrY97z/ stegoseed snapshot 24886cd
<pinbot> now pinning on 8 nodes
<pinbot> pinned on 8 of 8 nodes (0 failures) -- https://ipfs.io/ipfs/QmSaJUuCesYDpxLbHrrKA3m2HjN4YcufhAtDH2achrY97z/
<whyrusleeping> TheGillies: you have to be on its friends list
<whyrusleeping> jamesstanley: :)
<TheGillies> !ping QmSW6WxNiwZpxKTbYgRBty4shQzrrMAs5L8eHfib4FULzy why
<TheGillies> doh
<Stskeeps> jamesstanley: cool
<TheGillies> !pin /ipfs/QmSW6WxNiwZpxKTbYgRBty4shQzrrMAs5L8eHfib4FULzy why
<whyrusleeping> !pin /ipfs/QmaF4DKh6nyYfrboYRvzbJarY9ntRUup2xPqs2DpbaoWnh/ bip39 tool 0d11505
<pinbot> now pinning on 8 nodes
<TheGillies> pinbot: y u no like me
<pinbot> pinned on 8 of 8 nodes (0 failures) -- https://ipfs.io/ipfs/QmaF4DKh6nyYfrboYRvzbJarY9ntRUup2xPqs2DpbaoWnh/
<whyrusleeping> TheGillies: heh, because pinbot doesnt quite have enough storage to make it more open to everyone pinning stuff yet
<jbenet> pin the blog too -- resolve /ipns/QmdnD2bUSWcZorAwcTs7rftNm19YZXLWkmdefsvrjdMmAe/
<jbenet> jamesstanley: looking good! cant dive deep atm, but thanks for writing
<TheGillies> ipfs resolve -r /ipns/QmdnD2bUSWcZorAwcTs7rftNm19YZXLWkmdefsvrjdMmAe/: Could not resolve name.
<jamesstanley> hmm that's not ideal
gmcabrita has quit [Quit: Connection closed for inactivity]
<jamesstanley> I wonder if my node has run out of memory again
<whyrusleeping> jbenet: we really need a service that republishes ipns records for people
<whyrusleeping> would be really cool
<jamesstanley> it's working now, I didn't do anything
<jamesstanley> (I run ipfs daemon in a bash while loop, because it keeps running out of memory)
<jamesstanley> but doesn't look good in an article all about how great ipfs is if the blog can't be resolved over ipfs
<whyrusleeping> !pin /ipfs/QmUmDFVDDMnsjvK2sbHDHrCV8eU3fZ7gySNRKad3VTCN9U jamesstanleys blog
<pinbot> now pinning on 8 nodes
<pinbot> pinned on 8 of 8 nodes (0 failures) -- https://ipfs.io/ipfs/QmUmDFVDDMnsjvK2sbHDHrCV8eU3fZ7gySNRKad3VTCN9U
<jbenet> whyrusleeping we really need a working ipns
<jbenet> jamesstanley: agreed.
<TheGillies> jbenet++
<whyrusleeping> jbenet: it works
<whyrusleeping> its just using the parameters you told me to use :P
<jbenet> no, that's not the problem here
<TheGillies> is there a command to see what's pinned?
<jbenet> the problem is the model. the dht is not the right abstraction for it in a network capable of consensus, or a network that needs subsecond resolution
<whyrusleeping> using a dht *was* one of the parameters initially ;)
<whyrusleeping> but yeah, we need to find something better
<TheGillies> yeah was sad when i got stoked on ipns then realized i had to wait around awhile for every lookup
<TheGillies> ended up just throwing records into dns
<jbenet> dns has been surprisingly good here
<jbenet> bbiab
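(The DNS approach mentioned above is a dnslink TXT record; example.com and the hash are placeholders, and `ipfs dns` is assumed to be available:)

    # TXT record on the domain, pointing at an IPFS (or IPNS) path:
    #   example.com.  300  IN  TXT  "dnslink=/ipfs/<hash>"
    dig +short TXT example.com   # inspect the record
    ipfs dns example.com         # have the local node resolve it
    # the public gateway then serves it at https://ipfs.io/ipns/example.com/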
btmsn has joined #ipfs
museless has quit [Ping timeout: 260 seconds]
<TryTooHard> Hi jbenet
Foxcool__ has joined #ipfs
rendar has joined #ipfs
TryTooHard has quit [Ping timeout: 260 seconds]
palkeo has quit [Ping timeout: 240 seconds]
Foxcool__ has quit [Ping timeout: 240 seconds]
ryantm has quit [Quit: Connection closed for inactivity]
Foxcool__ has joined #ipfs
sirdancealot has joined #ipfs
sirdancealot has quit [Ping timeout: 258 seconds]
akkad has quit [Excess Flood]
akkad has joined #ipfs
sirdancealot has joined #ipfs
petroleum has joined #ipfs
petroleum has quit [Max SendQ exceeded]
petroleum has joined #ipfs
petroleum has quit [Max SendQ exceeded]
Mateon1 has quit [Ping timeout: 240 seconds]
petroleum has joined #ipfs
Mateon1 has joined #ipfs
ylp has joined #ipfs
maxlath has joined #ipfs
petroleum has quit [Quit: petroleum]
<musoke[m]> I'm having trouble accessing https://ipfs.io/blog/24-uncensorable-wikipedia/ with firefox and the IPFS extension.
<musoke[m]> All I get is `Path Resolve error: no link named "24-uncensorable-wikipedia" under QmT41msKzvzAWMosVdmY3jqnm2oQBbxUtHSNCZJ9FydkXH`. Looks like an IPNS issue?
maxlath has quit [Ping timeout: 240 seconds]
maxlath has joined #ipfs
sirdancealot has quit [Ping timeout: 255 seconds]
<Kubuxu> disable dnslink redirect
mildred3 has quit [Ping timeout: 240 seconds]
dragrope has joined #ipfs
mahloun has joined #ipfs
espadrine` has joined #ipfs
mildred3 has joined #ipfs
Guest31939 has joined #ipfs
e337 has quit [Quit: Lost terminal]
jungly has joined #ipfs
Donald has joined #ipfs
<Donald> Hi Kubuxu
<Donald> Kubuxu: I am the person submitting the recent pull request
<Kubuxu> which one
<Kubuxu> aah
<Kubuxu> skeain
<Kubuxu> see files: sum_test.go and multihash_test.go
ianopolous has quit [Remote host closed the connection]
maxlath1 has joined #ipfs
maxlath has quit [Ping timeout: 240 seconds]
maxlath1 is now known as maxlath
kaotisk has joined #ipfs
<Donald> Kubuxu: I am a rookie at Go, more of a Python/javascript/C person... was hoping that someone can help improve the code
<Kubuxu> the code looks quite good
<Kubuxu> look at those files how other hashes are tested and just do similar tests for your hash
<Donald> Go is very hard to install when it comes to home directory, it does have conflicts with SciPy's Jupyter Notebook system though
<Donald> Different applications using the same variable...
<Kubuxu> what do you mean? if it is about GOPATH then set it to export GOPATH=$HOME/go
<Donald> Got it
<Donald> I am now trying to work with gx
<Kubuxu> don't worry about it, I can do that for you when you fix the tests
<Donald> "Command not found" for gx after "go get -u github.com/whyrusleeping/gx" and "go get -u github.com/whyrusleeping/gx-go"
<Donald> can't "gx init"
sirdancealot has joined #ipfs
Foxcool__ has quit [Ping timeout: 246 seconds]
<Kubuxu> the bins will be in $GOPATH/bin so you have to add it to your path
<Donald> Kubuxu: I did "export GOPATH=~/.go" in .bashrc, is that okay?
<Kubuxu> yeah and add do export PATH="$GOPATH/bin:$PATH"
<Kubuxu> and it is best to use $HOME instead of ~ in init scripts
<Donald> Kubuxu: but for my python i already have "export PATH="/home/brad/anaconda3/bin:$PATH""
<Kubuxu> it won't conflict
<Mateon1> Donald: Add that as a separate line
<Kubuxu> Mateon1: can you guide him through? I gtg
<Donald> Mateon1: How does it work?
<Mateon1> Kubuxu: Yep, I can do that
<Mateon1> Donald: export is a builtin function in bash, which sets variables
skeuomorf has quit [Ping timeout: 240 seconds]
<Mateon1> So, here you're setting the PATH variable to $GOPATH/bin: and then the previous PATH variable
<Mateon1> So you are effectively prepending "$GOPATH/bin" to the path
<Mateon1> The same goes for the Python anaconda line
<Donald> Mateon1: so what happens when Go invoke the variable? How does it know which is which?
<Mateon1> Let's say your path before running that command is just '/bin:/usr/bin' (it's probably more complex than that)
skeuomorf has joined #ipfs
<Mateon1> When you run the Anaconda line, you are setting path to /home/brad/anaconda3:/bin:/usr/bin
<Donald> Mateon1: So it is basically linking?
<Mateon1> And when you run the Go path line, you set it to $GOPATH/bin:/home/brad/anaconda3:/bin:/usr/bin - adding it in front
<Mateon1> Donald: Well, the PATH variable is special in bash, it tells bash where to look for executable files. If you say `cat somefile`, bash has to look for the `cat` executable in the directories specified in the special PATH variable
<Mateon1> In this case, the `gx` executable doesn't exist in /usr/bin, or /bin, so bash cannot find it
<Mateon1> If you add $GOPATH/bin to the path, bash also searches that directory for executables, and it finds gx there! So it runs
<Donald> Mateon1: "imports context: unrecognized import path "context" what?
<Mateon1> Huh, this seems like some other issue, what command caused this error?
<Donald> Mateon1: "go get -u github.com/whyrusleeping/gx"
cxl000 has joined #ipfs
<Mateon1> Donald: That's rather odd, for now, try running `go get -d github.com/whyrusleeping/gx`
<Mateon1> Donald: Once you do that, type `cd $GOPATH/src/github.com/whyrusleeping/gx` and then `go install`
Foxcool__ has joined #ipfs
<Donald> Mateon1: "package context: unrecognized import path "context"" same problem, is it that I need to mkdir?
<Mateon1> Hm, that's odd
<Mateon1> Let me try doing that with a clear go path
_rht has quit [Quit: Connection closed for inactivity]
<Mateon1> Ah, this might be caused by having an old version of Go, can you tell me what's your go version by typing `go version`?
<Donald> Mateon1: 1.2.1
<Mateon1> That is extremely old, gx required Go 1.7 or later, can you try to find a more up-to-date version?
<Mateon1> requires*
<TUSF> That's old... Like, there's a 1.x version every 6 months, and 1.8 was a couple months ago too. That's like, 3-4 years old?
<Donald> Mateon1: Linux Mint repos are wayyyyy behind
<TUSF> Yeah, repos tend to do that... Heck, Ubuntu repos are also at like 1.6, so...
<Mateon1> Donald: Can you try downloading a .tar.gz version from https://golang.org/dl/ ?
<Mateon1> If your system is 64 bit, you want go1.8.1.linux-amd64.tar.gz, otherwise go1.8.1.linux-386.tar.gz
<Donald> Mateon1: just direct download, unzip and /usr/local ?
<Mateon1> Yep, there should also be a README included that details the process
<Mateon1> After you unpack the tar.gz file
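(The manual install being described, as a sketch; the download URL is assumed to follow golang.org/dl's usual file naming:)

    wget https://golang.org/dl/go1.8.1.linux-amd64.tar.gz
    sudo tar -C /usr/local -xzf go1.8.1.linux-amd64.tar.gz   # unpacks to /usr/local/go
    /usr/local/go/bin/go version                             # should report go1.8.1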
xiengu has joined #ipfs
<Donald> Mateon1: Done, still 1.2.1
<Mateon1> Can you paste the result of `echo $PATH`?
<Mateon1> You might also want to uninstall the existing Go installation from your package manager
skeuomorf has quit [Ping timeout: 245 seconds]
xelra has quit [Remote host closed the connection]
<Mateon1> Seems the linux mint package manager is `apt`, so to uninstall, you want to do `sudo apt-get remove --purge golang-go`. After that, if /usr/local/bin is in your PATH variable, you should have the up-to-date version available on your command line
xelra has joined #ipfs
<Donald> Mateon1: Haven't reinstalled, but "go version xgcc (Ubuntu 4.9.3-0ubuntu4) 4.9.3 linux/amd64" is weird!
<Mateon1> Hm, that is weird, what does `which go` output?
<Mateon1> Donald: Ah, you seem to have gccgo installed, as with this answer: http://stackoverflow.com/a/29620655/2421067 - try running `sudo apt-get remove gccgo`
Foxcool__ has quit [Ping timeout: 240 seconds]
<Donald> Mateon1: "/usr/bin/go" does not exist, should I make it?
<Mateon1> No, it should be /usr/local/bin/go
<Mateon1> Can you paste your PATH variable here? Type `echo $PATH` to get it
<Donald> Mateon1: /home/brad/.go/bin:/home/brad/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
<Mateon1> Okay, so /usr/local/bin is already in path, what does `go version` say?
<Donald> Mate
<Donald> Mateon1: bash: /usr/bin/go: No such file or directory
<Mateon1> Hm... That's really odd
<Mateon1> What's the output of `which go`?
<Donald> Mateon1: Nothing
xelra has quit [Remote host closed the connection]
<Mateon1> That's really confusing, hold on for a minute while I google
<Mateon1> Ah, okay, that seems to be a different implementation of `which` that doesn't show error messages if it can't find a binary
bingus has quit [Ping timeout: 240 seconds]
<Mateon1> You should copy the files from the unpacked .tar.gz archive to /usr/local again
<Kubuxu> hash -r
<Kubuxu> Donald: ^^
<Kubuxu> locations of files in path are cached
bingus has joined #ipfs
<Mateon1> Kubuxu: Ah, that's interesting
<Mateon1> Didn't know that
gmcabrita has joined #ipfs
<Donald> Mateon1: With export GOPATH="$HOME/.go" export PATH="$GOPATH/bin:$PATH", and using "sudo tar -C /usr/local -xzf go1.8.1.linux-amd64.tar.gz", when I check "go version", it says go is not installed
<Donald> "hash -r" returns nothing
<Mateon1> hash -r makes bash forget the previous locations, so if you try typing `go` now, it should work
<Donald> Mateon1: No dice. Still says go is not installed
<Mateon1> Ah, I know what the issue is
<Mateon1> The tar file has everything in it under the ./go/ directory
<Mateon1> So, all the Go tools are located in /usr/local/go/bin, which is not in path
<Mateon1> Wait, no, that's correct
<Mateon1> You have to add /usr/local/go/bin to you PATH, in the same place as you add anaconda3 and $GOPATH/bin
<Mateon1> to your*
xelra has joined #ipfs
<Donald> Mateon1: wait...
<Donald> export PATH="/home/brad/anaconda3/bin:$PATH" export GOPATH="$HOME/.go" export PATH="$GOPATH/bin:$PATH"
<Mateon1> And then: export PATH="/usr/local/bin/go:$PATH"
<Mateon1> You can actually combine the lines into one
<Mateon1> Sorry, /usr/local/go/bin
<Mateon1> So, a combined line would look like: export PATH="/usr/local/go/bin:$GOPATH/bin:/home/brad/anaconda3/bin:$PATH"
<Donald> Mateon1: could it be the go path?
<Donald> *gopath
<Mateon1> No, as you would have to use sudo every time you compile or install anything in Go
anderspree_ has quit []
<Donald> Mateon1: export PATH="/home/brad/anaconda3/bin:$PATH" export GOPATH="$HOME/.go" export PATH="/usr/local/go/bin:$GOPATH/bin:$PATH"
anderspree_ has joined #ipfs
<Mateon1> Yep, that should work :)
<Donald> Mateon1: finally!
<Mateon1> Awesome, now everything from this point should be much more painless
<Mateon1> You can try: `go get -u github.com/whyrusleeping/gx` again
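(The configuration the two of them converge on, collected in one place; paths are the ones from the transcript:)

    # ~/.bashrc
    export GOPATH="$HOME/.go"
    export PATH="/usr/local/go/bin:$GOPATH/bin:/home/brad/anaconda3/bin:$PATH"
    # then reload and clear bash's cached command locations:
    #   source ~/.bashrc && hash -r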
kaotisk has quit [Read error: Connection reset by peer]
kaotisk has joined #ipfs
mildred4 has joined #ipfs
mildred3 has quit [Read error: Connection reset by peer]
<Donald> Mateon1: Finally, it is working
<Donald> But still, I tried "go test" and it only runs the multihash_test.go
<Donald> And I can't "go run *_test.go" for any test files
<lgierth> go test ./...
Donald has quit [Ping timeout: 260 seconds]
HostFat has joined #ipfs
Atomic_dNLFb has joined #ipfs
<Atomic_dNLFb> Hi Mateon1
<Atomic_dNLFb> The wifi broke, sorry
<Mateon1> Hello, you're Donald, right?
<Atomic_dNLFb> yep
<Atomic_dNLFb> on my phone right now
<Atomic_dNLFb> Mateon1: Tried "go test sum.go" and it says the variables in multihash.go are undefined, and that Multihash itself is undefined
NeoTeo has quit []
NeoTeo has joined #ipfs
<Mateon1> I'm trying to reproduce this myself right now, to see what's happening there
Foxcool__ has joined #ipfs
<Mateon1> Hm, that's odd
<Mateon1> I don't get "Multihash is undefined", but I do get errors regarding cids and multihashes
<lgierth> go test ./... is fine here too
<lgierth> did you clone it within GOPATH?
<lgierth> i.e. the repo should be in GOPATH/src/github.com/multiformats/go-multihash
<lgierth> (a symlink will do too)
<Mateon1> For example, go test ./blocks : blocks/blocks.go:59: cannot use b.cid.Hash() (type "gx/ipfs/.../go-multihash".Multihash) as type "github.com/ipfs/go-ipfs/vendor/gx/ipfs/.../go-multihash".Multihash in return argument
<lgierth> did you vendor stuff?
<lgierth> that won't easily work
<Atomic_dNLFb> Mateon1 I copied everything in my edited repo through github
<Mateon1> lgierth: What do you mean by vendor? the vendor directory was created by gx install
<lgierth> oh. do gx install --global
<lgierth> it should be the default by now too (update gx)
<Mateon1> I did just update gx, or so I thought
<lgierth> mmh
<lgierth> i've never gotten vendor/gx/ to work
<Mateon1> gx 0.11.0
Foxcool__ has quit [Ping timeout: 260 seconds]
<Mateon1> Okay, rm -rf vendor and gx i --global fixed it
<Mateon1> Still odd, I am running latest master for gx
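(The fix just described, as one sequence; it assumes the repo is already cloned under GOPATH and that gx/gx-go are on PATH:)

    cd "$GOPATH/src/github.com/ipfs/go-ipfs"
    rm -rf vendor/          # drop the locally vendored gx packages
    gx install --global     # reinstall dependencies into the global GOPATH instead
    go test ./...           # run the whole test tree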
<Mateon1> What's also odd - I am getting 500 kB/s of kademlia traffic
dimitarvp has joined #ipfs
Foxcool has joined #ipfs
jkilpatr has joined #ipfs
<drewolson> i'm like 90% sure i'm seeing behavior both via the gateway and the command line client where the first resolution of an ipns name takes a very long time, but if i cancel that resolution and immediately resolve (or reload the page) again, it resolves immediately
<drewolson> am i crazy?
<drewolson> it's like the resolution of the first command finishes and caches the content locally, but it does not return the result to the client
<drewolson> but if i cancel and immediately rerun, it pulls the content from the local cache
<Mateon1> I am not understanding something here... IPFS seems to be a massive web of dependencies and I don't know what comes from where
<Mateon1> How does ipfs stats bw --proto work?
<Mateon1> https://github.com/ipfs/go-ipfs/blob/master/core/commands/stat.go#L154 - So it uses the string passed and calls a magical ID function
<Mateon1> Except that ID function is a string
<Mateon1> What?
<Mateon1> Can you call strings in Go?
<Mateon1> Here's the source for go-libp2p-protocol: /ipfs/QmZNkThpqfVXs9GNbexPrfBbXSLNYeKrE7jwFM2oqHbyqN/go-libp2p-protocol/protocol.go
<Mateon1> Ah, wait, it's not a function, it's a type
<Mateon1> So all the magic happens in the Reporter
<Mateon1> So, the reporter is a go-libp2p-metrics instance... But how does it register bandwidth use? All references Github search yields are useless
<lgierth> Kubuxu knows more :)
<drewolson> if folks want to try replicating what i'm seeing, try `ipfs resolve /ipns/QmdvfdvBsdS58AnQBbanwawYNiJGM9oZSXGBMthkQyTzjJ`. it should hang for a while. then ctrl+c it and immediately rerun. it seems to resolve immediately at that point.
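(The same repro with timing added; the second run is expected to return almost instantly if the first one warmed the local cache:)

    time ipfs resolve /ipns/QmdvfdvBsdS58AnQBbanwawYNiJGM9oZSXGBMthkQyTzjJ   # first run: hangs or is very slow
    # Ctrl+C, then immediately:
    time ipfs resolve /ipns/QmdvfdvBsdS58AnQBbanwawYNiJGM9oZSXGBMthkQyTzjJ   # second run: returns right away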
<Mateon1> Wait, so the magic actually happens somewhere else - go-libp2p-swarm? Ugh, I hate this jumping through deps
<Atomic_dNLFb> SWARM!? from ethereum!? Talk about jumping sharks!
<Mateon1> swarm refers to a peer-to-peer swarm, same as in BitTorrent
<Mateon1> So... go-libp2p-transport.Conn wraps conns with go-libp2p-metrics bandwidth reporter, as defined in go-libp2p-swarm, called from ipfs/core/core.go, but the bandwidth reporter is actually saved on the swarm object, so I still have a lot of source code to dig through
<Mateon1> Oh, wait, it's just because Go packages are not URLs even though they try to be
dragrope has quit [Ping timeout: 240 seconds]
<Kubuxu> Mateon1: what are you using as your IDE/editor?
<Mateon1> I don't have one
Oatmeal has joined #ipfs
<Mateon1> vim or emacs, but right now just Chromium
<Kubuxu> vim is nice, and go-vim is even better, `:GoDef` to jump to definition
<Kubuxu> or g] iirc
<Mateon1> I finally found this: https://github.com/libp2p/go-libp2p-metrics/blob/master/conn/conn.go#L15 but it doesn't get me any closer to finding out what protocols ipfs stats bw -t supports
Soulweaver has quit [Remote host closed the connection]
<Mateon1> Kubuxu: Well, too bad, currently I'm using vanilla vim, and when I need plugins to do something I use my emacs with evil-mode setup
<Mateon1> Actually, is there a go-mode
<Mateon1> Yep, there is a go layer, let's see what it does
<Kubuxu> Mateon1: protocol as in stream protocol
<Kubuxu> see `ipfs swarm peers --streams`
<Mateon1> Okay, that's what I was looking for
<Mateon1> Nice
<Mateon1> /floodsub/1.0.0 /ipfs/bitswap /ipfs/bitswap/1.0.0 /ipfs/bitswap/1.1.0 /ipfs/dht /ipfs/diag/net/1.0.0 /ipfs/diagnostics /ipfs/kad/1.0.0
<Mateon1> And <no protocol name> lol
<lgierth> dht and kad are separate things?!
jkilpatr has quit [Ping timeout: 255 seconds]
<Kubuxu> lgierth: old and new
<Kubuxu> used to be dht
<Kubuxu> we renamed protocol to kad
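(Putting the two commands together; any protocol ID from the stream list can be passed to --proto:)

    ipfs swarm peers --streams                  # open streams and their protocol IDs
    ipfs stats bw --proto /ipfs/bitswap/1.1.0   # bandwidth attributed to one protocol
    ipfs stats bw --proto /ipfs/kad/1.0.0       # DHT traffic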
<Mateon1> My current stats are... interesting
Caterpillar2 has joined #ipfs
<Mateon1> bitswap:4GiB/3.5GiB in/out; bitswap/1.0.0: 2KiB/61MiB; bitswap/1.1.0: 17.5GiB/26GiB; dht: 16.3GiB/5.1GiB; kad/1.0.0: 90GiB/11.6GiB !!!
<Mateon1> That's over the course of about a week, my only large download was the wikipedia pin
nkhodyunya[m] has joined #ipfs
Caterpillar2 has quit [Ping timeout: 240 seconds]
Caterpillar2 has joined #ipfs
caseorganic has quit []
caseorganic has joined #ipfs
shizy has joined #ipfs
e337 has joined #ipfs
Anchakor has quit [Ping timeout: 240 seconds]
asui has quit [Ping timeout: 240 seconds]
Anchakor has joined #ipfs
Caterpillar2 has quit [Ping timeout: 268 seconds]
asui has joined #ipfs
<Kubuxu> ouch
<Kubuxu> we also found out recently that the diagnostics endpoint is being misused so we will be removing it from 0.4.10
<Kubuxu> it can also cause a lot of DHT traffic
<Atomic_dNLFb> Hi Kubuxu
<Atomic_dNLFb> this is Donald
<achin> which diagnostics endpoint?
<Atomic_dNLFb> Finally got go installed, but still struggling to run tests
<Mateon1> Atomic_dNLFb: What happens when you run go test ./...?
<Kubuxu> achin: ipfs diag net
maxlath1 has joined #ipfs
<achin> ah
<achin> well, i think it was always known that it couldn't survive forever
<Atomic_dNLFb> Mateon1 could you emphasise the file paths with spaces?
maxlath has quit [Ping timeout: 260 seconds]
maxlath1 is now known as maxlath
<Mateon1> go test ./...
<Mateon1> Run in the directory $GOPATH/src/github.com/ipfs/go-ipfs
<Atomic_dNLFb> when I run the go test go-multihash checks out, the multihash and opts subfolder has no test files
<Atomic_dNLFb> (Run in the repo file)
<Atomic_dNLFb> (my edited repo file)
<Atomic_dNLFb> Mateon1
<Mateon1> I don't really know what you're trying to do, sorry
<Mateon1> By repo do you mean github.com/ipfs/go-ipfs/repo? What directory are you in?
<Kubuxu> Mateon1: he works on go-multihash
<drewolson> i have a question about running the ipfs daemon on multi-user systems. right now, it seems that only the user running the daemon can execute ipfs commands from the command line client. is that correct?
<drewolson> if so, is this planning on being changed in the future?
<drewolson> it seems like a big limitation
yoyogo has quit [Ping timeout: 240 seconds]
<Mateon1> No, any user can run ipfs commands from the command line, if they have read (not sure if write is necessary) access to $IPFS_PATH
<drewolson> Mateon1 i'm seeing different behavior on my server.
<drewolson> though i only have read access right now
<Mateon1> Have you set IPFS_PATH, or is it unset?
<achin> (note that $IPFS_PATH defaults to $HOME/.ipfs)
<drewolson> it is unset, ah, i didn't realize that was a thing
<Mateon1> If it's unset, ipfs assumes the current user's $HOME/.ipfs
<drewolson> thanks, let me give that a shot
<drewolson> sweet, it works
<drewolson> thanks all.
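(The multi-user setup described above, as a sketch; /home/alice/.ipfs is a hypothetical repo path that the second user needs read access to:)

    export IPFS_PATH=/home/alice/.ipfs   # point the CLI at another user's repo
    ipfs id                              # commands now use that repo (and its daemon's API address)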
<drewolson> is there somewhere i can read more about configuring the daemon?
<Mateon1> Well, you can try ipfs --help, ipfs daemon --help, ipfs init --help, etc. for starters
<Mateon1> There is also online documentation
mahloun has quit [Ping timeout: 245 seconds]
sirdancealot has quit [Ping timeout: 260 seconds]
ashark has joined #ipfs
sirdancealot has joined #ipfs
Foxcool has quit [Ping timeout: 272 seconds]
yoyogo has joined #ipfs
JayCarpenter has joined #ipfs
<drewolson> thanks.
sirdancealot has quit [Ping timeout: 240 seconds]
leeola has joined #ipfs
taaem has joined #ipfs
<Atomic_dNLFb> Kubuxu: could I pass the ball to you on the edit? I think I might need more time to touch up on my 30-hours-only Go skills.
<Atomic_dNLFb> *3-hours only
joelburget has joined #ipfs
yoyogo has quit [Ping timeout: 240 seconds]
yoyogo has joined #ipfs
<DokterBob> Hey folks, is it possible to (selectively) erase stuff from filestore?
discopatrick has quit []
<DokterBob> I have some deleted files that should not be referred to from filestore anymore.
discopatrick has joined #ipfs
<joelburget> DokterBob: `ipfs pin rm -r <hash>; ipfs repo gc`?
<DokterBob> Ok, so I could do some pipe magic and `ipfs filestore verify | <awk-magic> | ipfs pin rm <hash> | ipfs repo gc`
<DokterBob> Note: I'm specifically talking about the filestore, not the normal store.
john1 has joined #ipfs
<joelburget> Oh so you're looking for <awk-magic>?
joelburget has quit [Remote host closed the connection]
joelburget has joined #ipfs
Guest31939 has quit [Ping timeout: 264 seconds]
<DokterBob> No that part I can figure
<DokterBob> ;)
<DokterBob> Just I didn't know that repo garbage collection also worked for filestore
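(A rough shape for the pipe sketched above; the "<status> <hash> <path>" column order and the status strings of `ipfs filestore verify` output are assumptions, and whether the hashes are removable pin roots depends on how the files were added — check a few lines of real output before running this:)

    # unpin everything filestore verify no longer considers "ok", then collect garbage
    ipfs filestore verify | awk '$1 != "ok" { print $2 }' | xargs -r -n1 ipfs pin rm
    ipfs repo gc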
stoopkid has quit []
stoopkid has joined #ipfs
Foxcool has joined #ipfs
john1 has quit [Ping timeout: 240 seconds]
john1 has joined #ipfs
taaem has quit [Ping timeout: 240 seconds]
<lgierth> DokterBob: oh it does?
<lgierth> DokterBob: on an unrelated note, you're the one who built ipfs-search.com right?
<DokterBob> Yes. I expect my sponsored hosting to run out any moment.
<DokterBob> Making a backup of the index right now.
<DokterBob> On a more positive note, I actually might have a few days to put into it - at last
<lgierth> check out the search on tr.wikipedia-on-ipfs.org :)
<DokterBob> Hence contributions in the form of work or sponsorship (or adoption) are greatly appreciated
<lgierth> the search index is a data structure on IPFS
<DokterBob> lgierth: How's it work?
<lgierth> that means it's just as distributed as the data that it indexes
<lgierth> before adding all the files to ipfs, a primitive search index over the page titles is built
<lgierth> built in a way that makes it relatively efficient to query through normal ipfs paths
<DokterBob> Definitely the way forward.
<DokterBob> I was just discussing with a friend how it would be much preferable to have the metadata be added with the content.
<DokterBob> However, within the limited time I have (it's a hobby project), also building an efficient IPFS-based data structure for storing the search index seems unfeasible.
<DokterBob> Take into account that ipfs-search is currently hitting Elasticsearch's default 1024-field limit
<DokterBob> And the index is about 200GB
<lgierth> yeah it's really non-trivial
<lgierth> also largely uncharted territory
<lgierth> any worthwhile info you stumble upon is greatly appreciated
<lgierth> the same idea can be applied to mapping and street routing too
<joelburget> lgierth: can i see this code?
Jim[m] has joined #ipfs
<joelburget> thx :)
<lgierth> oh mh i'm not sure where the code is that builds the index
jsgrant_om has quit [Quit: Peace Peeps. o/ If you need me asap, message me at msg(at)jsgrant.io & I'll try to get back to you within 24 hours.]
Jim[m] has left #ipfs ["User left"]
Caterpillar2 has joined #ipfs
droman has joined #ipfs
Caterpillar2 has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
chungy has quit [Ping timeout: 255 seconds]
yoyogo has quit [Ping timeout: 240 seconds]
cwahlers has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
yoyogo has joined #ipfs
emschwartz has quit []
emschwartz has joined #ipfs
ylp has quit [Quit: Leaving.]
reit has quit [Quit: Leaving]
<Kubuxu> You can take a look at recent demo here: http://ipfs.io/ipfs/QmVPkidhs5ZCnfSTA1A3SX5u9UP8CASHyzwGAVVfZ1CibL/
<Kubuxu> using simple english wikipedia
Foxcool has quit [Ping timeout: 246 seconds]
Foxcool has joined #ipfs
dimitarvp` has joined #ipfs
Jesin has quit [Quit: Leaving]
dimitarvp has quit [Ping timeout: 272 seconds]
dimitarvp` is now known as dimitarvp
<jnagro> woah
<jnagro> how does that work>
<jnagro> ah i see the code. thats pretty sweet.
jsgrant_om has joined #ipfs
galois_dmz has joined #ipfs
petroleum has joined #ipfs
imvr has quit [Remote host closed the connection]
jkilpatr has joined #ipfs
galois_dmz has quit [Ping timeout: 258 seconds]
galois_d_ has joined #ipfs
john1 has quit [Ping timeout: 246 seconds]
john1 has joined #ipfs
joelburget has quit [Ping timeout: 246 seconds]
john1 has quit [Ping timeout: 255 seconds]
ygrek has joined #ipfs
jkilpatr has quit [Ping timeout: 260 seconds]
petroleum has quit [Quit: petroleum]
ZarkBit has joined #ipfs
reit has joined #ipfs
reit has quit [Client Quit]
reit has joined #ipfs
robattila256 has quit [Ping timeout: 260 seconds]
cwahlers has joined #ipfs
kaotisk has quit [Read error: No route to host]
<drewolson> how much CPU / memory should i expect my node to consume? right now it's using more than any other process by a wide margin.
<r0kk3rz> works well, it would be nice to hijack someone elses indexer though
<DokterBob> drewolson: in production, I would never run it outside a container
<DokterBob> drewolson: basically, it'll eat anything it can get
<drewolson> DokterBob yikes, ok
<DokterBob> although on my MB right now it's running fine
<DokterBob> and in production i'm running a search crawler ;)
<drewolson> it's running "fine", but i've only pinned 1 static webpage and it's consuming 25% of my CPU
<drewolson> and 25% of my memory
<DokterBob> oh that's not good
<DokterBob> is your computer a raspberry pi?
<drewolson> well, it's more like 10% of my CPU and 25% of my memory
<drewolson> but still, that seems pretty bonkers
<drewolson> my ghost blog and my hosted irc client (lounge) both use far less
<r0kk3rz> do they use DHTs? :P
<drewolson> no, which is why i'm trying to understand what my baseline expectations should be :)
<drewolson> DokterBob this is on a digital ocean droplet with 1gig of ram
<DokterBob> Well at my mom's place I'm running IPFS on a Raspi and I am seeing that kind of load ;)
<jamesstanley> I tried to add a ~30M video to my ipfs node at work earlier and got errors about too many open files
<jamesstanley> I had to try 3 times to get it to complete
<jamesstanley> but just now, testing with a 300M file, it works fine
<jamesstanley> is this something that's been fixed in git but not in the binary I downloaded at work, or did I just get unlucky?
<jamesstanley> if the former, great; if the latter, I'll try and work out how to reproduce
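(A common workaround for "too many open files" — an assumption here, not a confirmed fix for this case — is raising the file-descriptor limit before starting the daemon:)

    ulimit -n 2048   # per shell; the default on many distros is 1024
    ipfs daemon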
traverseda has quit [Ping timeout: 260 seconds]
<DokterBob> ah maybe it's a thing with the release candidate
<DokterBob> latest i'm running is 0.4.8
ShalokShalom has joined #ipfs
ShalokShalom_ has quit [Ping timeout: 260 seconds]
john1 has joined #ipfs
Jesin has joined #ipfs
john1 has quit [Ping timeout: 240 seconds]
rendar has quit [Ping timeout: 260 seconds]
traverseda has joined #ipfs
jungly has quit [Remote host closed the connection]
john1 has joined #ipfs
ianopolous has joined #ipfs
skeuomorf has joined #ipfs
trojkat has joined #ipfs
<trojkat> Hi there!
<trojkat> What's going on with the tr.wikipedia snapshot?
<trojkat> "no link named "wiki" under Qme2sLfe9ZMdiuWsEtajWMDzx6B7VbjzpSC2VWhtB6GoB1"
jsgrant_om has quit [Quit: Peace Peeps. o/ If you need me asap, message me at msg(at)jsgrant.io & I'll try to get back to you within 24 hours.]
jsgrant_om has joined #ipfs
<trojkat> Ok, so I have enable sharding :-)
<trojkat> *have to
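(Enabling the experiment referred to here; the config key is assumed to match the directory-sharding experiment's documented flag, and the daemon needs a restart afterwards:)

    ipfs config --json Experimental.ShardingEnabled true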
ZarkBit has quit [Quit: Going offline, see ya! (www.adiirc.com)]
rendar has joined #ipfs
<trojkat> or use 0.4.9?
traverseda has quit [Ping timeout: 260 seconds]
espadrine` has quit [Ping timeout: 245 seconds]
kaotisk has joined #ipfs
<trojkat> Ok, it's working on 0.4.9-rc2
Jesin has quit [Quit: Leaving]
Jesin has joined #ipfs
ZarkBit has joined #ipfs
traverseda has joined #ipfs
<trojkat> Thanks Kubuxu for that AUR package!
taaem has joined #ipfs
shizy has quit [Quit: WeeChat 1.7.1]
sirdancealot has joined #ipfs
shizy has joined #ipfs
espadrine has joined #ipfs
dimitarvp has quit [Quit: Bye]
sirdancealot has quit [Ping timeout: 240 seconds]
john1 has quit [Ping timeout: 240 seconds]
Atomic_dNLFb has quit [Quit: AtomicIRC: The nuclear option.]
john has joined #ipfs
john is now known as Guest96076
atrapado_ has joined #ipfs
ashark has quit [Ping timeout: 240 seconds]
trojkat has quit [Quit: Page closed]
jkilpatr has joined #ipfs
skeuomorf has left #ipfs ["Killed buffer"]
<Mateon1> drewolson: Sorry for the late response. IPFS uses a lot of memory, most likely because of the amount of connections it uses, a typical amount of RAM used ranges from 1 GB to 1.5 GB or so. It shouldn't use a lot of CPU while idle, though. Does it continue to use a lot of CPU after having a few minutes to 'cool down' after startup?
yoyogo has quit [Ping timeout: 240 seconds]
<drewolson> Mateon1: yes, it seems to continually use ~10% of my CPU
ashark has joined #ipfs
<Mateon1> Well, that's not what I would expect, hold on while I look up some debug commands
<drewolson> i've seen it both on the rc and the latest stable release
<Mateon1> drewolson: Okay, can you type these curl commands to collect debug info into a directory, then upload? https://github.com/ipfs/go-ipfs/blob/master/docs/debug-guide.md
<drewolson> Mateon1 i can do so later this evening
<Mateon1> Hm, what's that in hours? I'm in a different timezone than you are
<drewolson> probably not for 8 hours or so
<drewolson> or perhaps tomorrow
espadrine has quit [Ping timeout: 260 seconds]
<Mateon1> Ah, too bad, I won't be able to help then. You can do these steps and report that as an issue for go-ipfs on Github, we can always use more info to see what's slow
<drewolson> will do
<drewolson> thanks
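(The linked guide boils down to collecting roughly the following; endpoints are taken from it and assume the API on the default localhost:5001:)

    curl 'localhost:5001/debug/pprof/goroutine?debug=2' > ipfs.stacks
    curl 'localhost:5001/debug/pprof/profile' > ipfs.cpuprof   # blocks ~30s while profiling
    curl 'localhost:5001/debug/pprof/heap' > ipfs.heap
    ipfs version > ipfs.version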
Foxcool has quit [Ping timeout: 264 seconds]
yoyogo has joined #ipfs
Encrypt has joined #ipfs
M283750649[m] has joined #ipfs
Encrypt has quit [Quit: Quit]
Jesin has quit [Quit: Leaving]
jsgrant_om has quit [Quit: Peace Peeps. o/ If you need me asap, message me at msg(at)jsgrant.io & I'll try to get back to you within 24 hours.]
jsgrant_om has joined #ipfs
Mateon1 has quit [Remote host closed the connection]
Mateon1 has joined #ipfs
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
lemmi has quit [Remote host closed the connection]
shizy has quit [Ping timeout: 260 seconds]
lemmi has joined #ipfs
ljhms has quit [Ping timeout: 240 seconds]
atrapado_ has quit [Quit: Leaving]
chungy has joined #ipfs
ashark has quit [Ping timeout: 240 seconds]
cxl000 has quit [Quit: Leaving]
athan has quit [Remote host closed the connection]
mildred1 has joined #ipfs
mildred has quit [Read error: Connection reset by peer]
screensaver has quit [Ping timeout: 255 seconds]
neurrowcat has joined #ipfs
neurrowcat is now known as Neur
Neur is now known as Neur0
droman has quit []
ljhms has joined #ipfs
maxlath has quit [Quit: maxlath]
jaboja has joined #ipfs
leeola has quit [Quit: Connection closed for inactivity]
btmsn has quit [Quit: btmsn]
JayCarpenter_ has joined #ipfs
robattila256 has joined #ipfs
galois_d_ has quit [Remote host closed the connection]
galois_dmz has joined #ipfs
galois_dmz has quit [Remote host closed the connection]
infinity0_ has joined #ipfs
infinity0_ has quit [Changing host]
infinity0 has joined #ipfs
infinity0 is now known as Guest12052
Guest12052 has quit [Killed (verne.freenode.net (Nickname regained by services))]
galois_dmz has joined #ipfs