<livegnik>
Thanks gatesvp & rschulman for your thoughts. I appreciate it.
<livegnik>
Identifi is a peer-to-peer identity & reputation database. It's a bit of a broad topic to explain without spamming you folks, but I'm always happy to answer any questions.
<livegnik>
I (Tim Pastoor) am working on setting up 2WAY.IO, the first company building solutions based on Identifi (with a permissions layer added to it, effectively turning public nodes into private ones).
* livegnik
will leave it there before he starts ranting
<livegnik>
gatesvp: Could you elaborate a bit on the 'managing a key-chain within an IPFS node' you're trying to solve?
<gatesvp>
@livegnik: no rant necessary, you are among fellow believers here, what you are solving for needs to be solved :)
<livegnik>
Well, we're just organizing trust, not solving it ;)
<livegnik>
If you solve trust, you can probably sell your solution to the FBI Counter-Intelligence dept. or something.
<livegnik>
Until we know how to look into other people's heads, organizing it is probably the best option. In that case, whitelisting is preferred over blacklisting, imho.
<gatesvp>
@livegnik: so when you start a node in IPFS it creates a public / private key pair used for IPNS... that's quasi-equivalent to DNS... it basically means that I can publish an IPFS hash as mutable content
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<livegnik>
That's sweet, because if I can use that same keypair for my node, problem solved. ;)
<gatesvp>
this means that you can effectively "publish" from any node that you own with a consistent hash, but without anything like domain ownership
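(A minimal sketch of that publish flow with the stock ipfs CLI, assuming the usual add/name flags; the ./site path here is made up for illustration:)
    HASH=$(ipfs add -r -q ./site | tail -n 1)   # root hash is the last line of quiet output
    ipfs name publish "$HASH"                   # sign a record pointing this node's peer ID at $HASH
    ipfs name resolve                           # resolve our own peer ID back to /ipfs/<HASH>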
<gatesvp>
Right... except we don't have any way to "share" keys right now
<livegnik>
That's where Identifi comes in. Too bad identi.fi is down, otherwise I could have explained the prototype demo front-end a bit to you.
<livegnik>
Demo effect ... *sigh*
<livegnik>
I've already pinged Sirius, so I'm waiting for it to come back up any day now ... See my problem? ;)
<gatesvp>
Which is really what you want, because you're more worried about "existence of key" than you are about "existence of a specific server"... but that "import a key" feature is going to happen and is already in discussion
<livegnik>
I'd have to read into IPFS in order to be able to say anything useful, but I think there could be a way for Identifi to help with this.
therealplato has joined #ipfs
<gatesvp>
@livegnik: once you're able to import your key into a given IPFS node, then you basically have some form of identity and distributed... trust can be built on top of that in whatever way you deem fit :)
eyebloom has quit [Quit: eyebloom]
<gatesvp>
"some form of identity that _is_ distributed"
<livegnik>
Conceptually it sounds doable. :)
<livegnik>
Thing is that I'm more of a conceptual guy than a coder. I was an M$ sysadmin for 12 years and quit my job to get a team and work on this, approximately 10 months ago.
<livegnik>
We're starting to get somewhere now, and will be publishing a series of articles over the next few months or so. There isn't a lot of documentation regarding (the proof-of-concept of) Identifi, so hopefully that helps others to understand / contribute.
<livegnik>
Also, we're working on a forum and a wiki, which should be a good start alongside the GitHub.
<livegnik>
Sirius is the guy behind the OSS project btw, I'm just a stoked fella who's trippin' on his software.
<livegnik>
Apart from that, I'm also stoked about IPFS, from what I've seen thus far. Therefore. :)
gendale_ has quit [Remote host closed the connection]
inconshreveable has quit [Read error: Connection reset by peer]
inconshreveable has joined #ipfs
<gatesvp>
@livegnik: my two cents on this one is that you're basically building a trust network around "people", but it's not really clear what counts as "people". Like are people going to be connecting their Facebook accounts? Blockchain is kind of a useful version of "distributed people" because everyone has a Public/Private key pair (wallet), so they have some form of ID.
<gatesvp>
Keybase.io also has some way of distributing keys
jedahan has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
gatesvp has quit [Quit: Page closed]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
gendale_ has joined #ipfs
rollick has quit [Ping timeout: 264 seconds]
<livegnik>
Sorry for the interruption gatesvp, I'll get back to you in a minute.
<whyrusleeping>
M-davidar: there, now its safe and secure :)
hleath has quit [Quit: hleath]
<M-davidar>
whyrusleeping: :)
<M-davidar>
I got a response back from CiteSeerX, their dataset is 3.5-4TB (compressed)
<M-davidar>
anyone happen to have that much space going to waste? :)
<whyrusleeping>
i have that much space, but the bandwidth that box is behind wouldnt dream of downloading that
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<M-davidar>
what's the roadmap for the cluster of nodes jbenet mentioned the other day?
<whyrusleeping>
hrm, which cluster?
<M-davidar>
"this is a much requested feature-- we're specing out how "clustering" ipfs nodes will work, so that they gather together their available disk space and distribute pieces."
Eudaimonstro has joined #ipfs
<whyrusleeping>
ah, thats still a few months out sadly
<whyrusleeping>
i really want to be able to do that, but it depends on a lot of other code being done first
<whyrusleeping>
notably, the keystore for sharing keys and designating trusted nodes
<M-davidar>
i wonder if citeseerx would consider running their own ipfs node?
<whyrusleeping>
that would be pretty cool :)
<M-davidar>
no idea how i'd convince them to do that though :)
<whyrusleeping>
yeah, with that big of a dataset, it might be hard to convince them to change their workflow
eyebloom has joined #ipfs
<M-davidar>
hmm, looking at their code, it seems as if the pdfs (which is the bulk of the data) are just stored as files on the disk
<M-davidar>
would it be possible for ipfs to serve that to the network without using too much extra resources?
pfraze has quit [Remote host closed the connection]
Eudaimonstro has quit [Ping timeout: 265 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<M-davidar>
e.g. is it possible for ipfs to only store metadata, without duplicating file contents into its own repo?
<whyrusleeping>
M-davidar: currently no, but we're speccing out a way of indexing files without duplicating
<whyrusleeping>
its hard to do right, and even if done right, will be very fragile
<whyrusleeping>
if any part of the file is changed, the entire thing becomes invalid
<M-davidar>
but no less fragile if its an archive that isn't supposed to have changes?
<M-davidar>
also, is there a limit on block sizes?
<M-davidar>
i.e. can you just serve up an entire file as a block, or would that cause problems?
<whyrusleeping>
there is a limit on block sizes, we dont allow any blocks larger than 1MB, for network performance reasons
Eudaimonstro has joined #ipfs
<whyrusleeping>
if its an archive, then yeah, it should be fine
<whyrusleeping>
it just makes me a bit nervous, lol
<M-davidar>
so, could you just split the file into 1MB chunks, and serve them as raw blocks
<M-davidar>
and have a master block that assembles them all, without having to put metadata in the blocks themselves
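(Roughly what M-davidar is describing, done by hand with the block command; the file name is hypothetical, the "master block" that reassembles the pieces is left out, and note this still copies the data into the repo, which is what the pointer approach discussed next avoids:)
    split -b 1000000 paper.pdf chunk.                    # carve the file into <1MB pieces
    for c in chunk.*; do ipfs block put < "$c"; done     # store each piece as a raw block, one hash per line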
<whyrusleeping>
M-davidar: eh, it would be easiest to just use our existing chunking algorithms, but instead of writing blocks, write pointers
<M-davidar>
so, ipfs would dereference the pointer before transferring the block, or it would point to a block that contains a raw chunk of data?
<whyrusleeping>
it would point to a block containing raw data
<M-davidar>
(which is then mapped to a section of a file)
<whyrusleeping>
the pointer would be something like {path="/home/dude/files/a",offset=1024,length=4096}
<M-davidar>
so you'd just need a local map of hashes to path+offset+len?
<whyrusleeping>
yeap, pretty much
<M-davidar>
would that be hard to implement?
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping>
implementing that functionality on its own, pretty easy
<whyrusleeping>
integrating it seamlessly into what we already have? thats the hard part
<gendale_>
so i have a dumb question for you guys
<gendale_>
i'm trying to perform "ipfs ls $hash" on a list of files that I have
<gendale_>
I have ipfs aliased to /home/gendale/go/bin/ipfs
<gendale_>
so ipfs ls $hash works with just a random hash
<gendale_>
so i run ~/go/src/alexandria/dataparse]$ for i in `cat magnets.txt`; do "ipfs ls $i";done
<gendale_>
but I just get a whole bunch of "command not found"
<gendale_>
this is probably a rookie bash scripting question, so my apologies if this is the wrong place
<gendale_>
it seems like the alias that I have set isn't carrying over to my bash script
<whyrusleeping>
gendale_: interesting... what does magnets.txt contain?
<gendale_>
a list of magnets
<whyrusleeping>
are they ipfs hashes?
<gendale_>
yes
<gendale_>
example output from file:
<gendale_>
bash: ipfs ls QmU8gm3Ffpt9jcpLEg4sGV7E18TR22Hjg2c5ztsebpDXTy: command not found
<gendale_>
bash: ipfs ls QmRA3NWM82ZGynMbYzAgYTSXCVM14Wx1RZ8fKP42G6gjgj: command not found
<gendale_>
bash: ipfs ls QmY4KXsCA1DEix74emacdCDR8TQHd7wieFLR47RQMqWpec: command not found
<whyrusleeping>
try using the absolute path to the binary?
<gendale_>
sorry, output from command
<gendale_>
yes
inconshreveable has quit [Read error: Connection reset by peer]
<whyrusleeping>
'/home/gendale/go/bin/ipfs ls $i'
<gendale_>
for i in `cat magnets.txt`; do "/home/gendale/go/bin/ipfs ls $i";done
<gendale_>
bash: /home/roerick/go/bin/ipfs ls QmU8gm3Ffpt9jcpLEg4sGV7E18TR22Hjg2c5ztsebpDXTy: No such file or directory
inconshreveable has joined #ipfs
<gendale_>
is the result, one output for each hash in the file
<gendale_>
gendale = roerick here, ignore the change in home dir
<whyrusleeping>
huh.. can you run any of those commands outside of the loop?
okket has quit [Ping timeout: 250 seconds]
<gendale_>
but simply running the command /home/roerick/go/bin/ipfs ls QmU8gm3Ffpt9jcpLEg4sGV7E18TR22Hjg2c5ztsebpDXTy
<gendale_>
seems to work fine
xelra has joined #ipfs
<whyrusleeping>
huh.
<whyrusleeping>
let me try some things...
<whyrusleeping>
ah
dawuud has quit [Ping timeout: 272 seconds]
<whyrusleeping>
okay
<whyrusleeping>
take the quotes off
<whyrusleeping>
from around ipfs ls $i
dawuud has joined #ipfs
<whyrusleeping>
its looking for a binary called 'ipfs ls $i'
gendale_ has quit [Read error: Connection reset by peer]
gendale_ has joined #ipfs
ryankarason has quit [Ping timeout: 272 seconds]
ryankarason has joined #ipfs
<whyrusleeping>
gendale_: o/
okket has joined #ipfs
Quiark has quit [Ping timeout: 246 seconds]
<gendale_>
i get: for i in $HASH... Qmeke1CyonqgKErvGhE18WLBuhrLaScbpSAS6vGLuoSCXM; do ipfs: File name too long
notduncansmith has joined #ipfs
<gendale_>
where $HASH is the contents of the file containing all the hashes
<gendale_>
that's when i run bash -o
notduncansmith has quit [Ping timeout: 250 seconds]
inconshreveable has quit [Ping timeout: 246 seconds]
* whyrusleeping
head hurts
<gendale_>
ah ok, i think i figured it out
<gendale_>
i think the quotes were making it run as one long string?
<whyrusleeping>
oooh, yeah
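(For reference, the corrected loop: quote only the argument, not the whole command, so bash looks up the ipfs binary instead of a binary literally named "ipfs ls <hash>":)
    for i in $(cat magnets.txt); do ipfs ls "$i"; done
    # or, a bit more robust against stray whitespace:
    while read -r i; do ipfs ls "$i"; done < magnets.txt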
Tv` has quit [Quit: Connection closed for inactivity]
<M-davidar>
whyrusleeping: re #875, would updating the datastore Put interface to include (optional) metadata like path,offset,etc (and then make a new datastore impl that actually uses that information) be a possible solution?
<whyrusleeping>
M-davidar: nope, the datastore isnt where we would want to do it
<whyrusleeping>
it would be best to have it as a custom blockstore
<whyrusleeping>
or, instead of modifying the datastore's interface
<whyrusleeping>
modify its implementation to map <blockhash> -> {path,offset,len} and then automatically resolve the pointer
<M-davidar>
could you remind me of the difference between datastore and blockstore?
<whyrusleeping>
so the data you put to the blockstore would just be {path,offset,len}
<whyrusleeping>
but gets would return file data
<whyrusleeping>
blockstore puts and gets block objects, its implementation only cares about blocks
<whyrusleeping>
and file chunks are blocks
joshbuddy has joined #ipfs
<M-davidar>
oh, i see, it just wraps datastore
<whyrusleeping>
yeap!
<whyrusleeping>
it *could* wrap a datastore and interpret the information differently
<whyrusleeping>
so instead of interpreting the information as block data, it would interpret it as a pointer to be resolved
<whyrusleeping>
yeah, i think it makes sense to have a custom blockstore for this
<M-davidar>
so, are you saying that the block objects that the blockstore receives (could) have {path,offset,len}
<whyrusleeping>
so 'put' on this blockstore would accept a '*Block' object to satisfy the interface, but the actual data in that block would be the pointer tuple
<M-davidar>
are there multiple blockstores already, or would that be the main difficulty?
<whyrusleeping>
we have one blockstore wrapping another
<whyrusleeping>
its pretty composable so that wouldnt be too big of an issue
<whyrusleeping>
the hard part would be the process from 'ipfs add...' to the disk
* whyrusleeping
needs to go to sleep
<whyrusleeping>
i'll leave you to ponder that, feel free to comment on the issue
<M-davidar>
ok, thanks
<M-davidar>
night
joshbuddy has quit [Quit: joshbuddy]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Quiark has joined #ipfs
<gendale_>
what is the preferred method for unpinning all content from ipfs daemon?
<M-davidar>
gendale_: i guess you could use ipfs pin rm -r if you only have a few top-level directories pinned
<M-davidar>
i.e. for all the hashes in ipfs pin ls -t recursive and ipfs pin ls -t direct
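(One way to script that suggestion; this assumes ipfs pin ls prints one "<hash> <type>" pair per line, so check the output format of your version first:)
    ipfs pin ls -t recursive | cut -d' ' -f1 | xargs -n1 ipfs pin rm -r
    ipfs pin ls -t direct    | cut -d' ' -f1 | xargs -n1 ipfs pin rm
    ipfs repo gc   # then drop the now-unpinned blocks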
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<gendale_>
cool, that worked fine
<gendale_>
only pinned magnets are broadcasted to DHT, correct?
inconshreveable has joined #ipfs
therealplato has quit [Ping timeout: 245 seconds]
therealplato has joined #ipfs
<M-davidar>
if you run ipfs repo gc then it removes all unpinned objects
<M-davidar>
otherwise i think everything in your cache is also available to the network
<cryptix>
jbenet: nearly cracked git-remote-ipfs - also dont need the lame checkout of the full bare repo to clone :)
dignifiedquire has joined #ipfs
therealplato has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
therealplato has quit [Ping timeout: 240 seconds]
ZioF0rk has joined #ipfs
ZioFork has quit [Ping timeout: 252 seconds]
<jbenet>
cryptix: nice!
* Luzifer
morning #ipfs!
<Luzifer>
(stupid shortcut for /me)
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
therealplato has joined #ipfs
* jbenet
is glad to not be the only one
<Luzifer>
:D
<M-davidar>
jbenet: I'm contemplating trying to convince citeseerx to run an ipfs node (no idea if they'd go for it, but they've been responsive so far), but I think #875 would be a major blocker
<jbenet>
M-davidar: could you include me in the conversation pls? (juan@benet.ai)
<jbenet>
or juan@ipfs.io
<jbenet>
M-davidar: one reason not to go for #875 is that -- with rabin chunking -- we can potentially drastically reduce their storage size
therealplato has quit [Ping timeout: 246 seconds]
<daviddias>
whyrusleeping: around ?
<M-davidar>
jbenet: cool, would you be able to help with spruiking the benefits of ipfs? :)
<jbenet>
M-davidar: yep, for sure
<jbenet>
whyrusleeping save us from symlink hell
<jbenet>
whyrusleeping \o/
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
mentos1386 has joined #ipfs
<M-davidar>
jbenet: are there any other potential benefits I could bring up? (reduced storage, reduced bandwidth, increased redundancy, ...)
<jbenet>
i cant believe checking a file size from shell portably is so hard
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
therealplato has joined #ipfs
<cryptix>
jbenet whyrusleeping: have you tried wrapping shell in http.FileSystem? would be a much nicer vfs than shell itself (and it would clean up most of the gateway handler too) https://godoc.org/net/http#FileSystem
inconshr_ has quit [Remote host closed the connection]
therealplato has quit [Ping timeout: 244 seconds]
<jbenet>
cryptix: nice! -- what would clean up in gateway handler? (i'm all for that!)
<cryptix>
id like to port our new index page to take such a http.FileServer, then you can also use it for localfs http.Dir(./) or other virtual file systems
notduncansmith has quit [Read error: Connection reset by peer]
voxelot has quit [Ping timeout: 265 seconds]
<jbenet>
substack: using comandante, i see cases where if the parent is killed, children linger. comandante is super small, so is this a child_process.spawn thing?
<jbenet>
substack: children should be killed according to the unix process tree abstraction, but looks like something is screwing up here-- at least in osx.
<jbenet>
ogd: you may have run into this as well o/
<ogd>
jbenet: not sure but check out npm i tree-kill
<jbenet>
ogd: the tricky bit is that if the parent is kill -9 <id>'d the parent cant do anything more, but the children should be killed too. there must be something being set on the exec/spawn calls to prevent typical kills from propagating
<jbenet>
one thing we can do is spawn things and give them a parent, if the parent disappears, they die too.
therealplato has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<Luzifer>
slackbot? interesting… you set up an irc2slack transfer?
notduncansmith has quit [Read error: Connection reset by peer]
mentos1386 has quit [Ping timeout: 260 seconds]
mentos1386 has joined #ipfs
<jbenet>
Luzifer: we didnt, others did
<Luzifer>
nkay
mildred has quit [Ping timeout: 240 seconds]
pfraze has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Eudaimonstro has quit [Ping timeout: 244 seconds]
<mentos1386>
how can you use a db (sql type or any ipfs alternative) for a website on ipfs? can the db also be decentralized? or would you have to use a normal db, and then all decentralized websites connect to it, but that kinda defeats the purpose of decentralization.
mildred has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<cryptix>
mentos1386: to truly use ipfs as a datastore you need to rethink your datastructures. blockchains are one option, or crdts if you want to go full berserk :)
<cryptix>
hyperlog is a node example of a general blockchain
<cryptix>
or scuttlebutt is also a noteworthy example
<cryptix>
or you just serialize your db to sql and store that but then you have to come up with a most-recent consensus
<mentos1386>
yeah, it's really different from the current web :) i'll check the blockchains, are there any "non-static" websites on ipfs?
<Bat`O>
guys, i've a small trouble with ipfs
<Bat`O>
$ ipfs get QmUYndb1SkY49khYAg9Zn2yp9Z44cRFtqMnVAZP5qes8ce
<Bat`O>
$ ipfs refs local | wc -l
<Bat`O>
181
<Bat`O>
$ ipfs refs local | grep -i QmUYndb1SkY49khYAg9Zn2yp9Z44cRFtqMnVAZP5qes8ce
<Bat`O>
return nothing
<Bat`O>
it should be in the local refs, right ?
<Bat`O>
(this is a file part of the webui)
<Bat`O>
(and it's correctly written in my filesystem)
<Bat`O>
reproduced on different machine
<jbenet>
mentos1386: ipfs can work as a kv-store, and sqlite is SQL on top of any kv-store, so you _can_ get SQL on ipfs, we just havent gotten close to that, and it wont be super fast unless you do some smart pre-fetching of stuff
<jbenet>
Bat`O that's right. i think it may be a bug.
<jbenet>
can you reproduce it reliably? like try going to another terminal and do: export IPFS_PATH=/tmp/ipfs-node && ipfs init && ipfs get QmUYndb1SkY49khYAg9Zn2yp9Z44cRFtqMnVAZP5qes8ce && ipfs refs local | grep -i QmUYndb1SkY49khYAg9Zn2yp9Z44cRFtqMnVAZP5qes8ce
<jbenet>
may need to start the daemon too
<jbenet>
Bat`O yeah i can reproduce it.
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<Bat`O>
i'll open a issue
<jbenet>
Bat`O thanks
domanic_ has quit [Ping timeout: 244 seconds]
<jbenet>
whyrusleeping: do you have time to chat in an hr?
<jbenet>
(or later
<voxelot>
implementing an sqlite-like feature for ipfs would mean you have figured out some sort of indexing and searching right? two things i am very interested in :)
thelinuxkid has joined #ipfs
<jbenet>
voxelot: sqlite already has one-- not tuned for ipfs, but it works over a filesystem.
<jbenet>
voxelot so you _can_ use it
<jbenet>
(a) getting something that works, and (b) making it fast, are two separate, usually sequential steps :)
<voxelot>
hey you just keep up the good work :)
<voxelot>
meanwhile i'm babystepping my way into ipfs with go tutorials, very C like
* whyrusleeping
yawns and rolls out of bed
<voxelot>
any suggestions as far as maybe an entry point to the go code for ipfs on github, sorta just daunted by all the code =/
<voxelot>
morning whyrusleeping
<whyrusleeping>
voxelot: gmornin!
andrei_s has joined #ipfs
<whyrusleeping>
voxelot: question, what do you want to work on? what is your goal?
<whyrusleeping>
i've always found the goal of 'learn the codebase' to be a bit much
<voxelot>
end goal is a social platform, so web apps with decentralized database
<voxelot>
so i really want to build out the node api
<voxelot>
get some privacy with cjdns
<voxelot>
and searching of course
<whyrusleeping>
okay, the node api? or the node implementation of the protocol?
<voxelot>
implementation
<voxelot>
building my whole site in node
<whyrusleeping>
okay, you'll definitely want to chat with daviddias about that then
<whyrusleeping>
he's working on that and will likely be able to point you in a better direction than I
<daviddias>
Hey :)
thelinuxkid has quit [Quit: Leaving.]
<voxelot>
okay thanks, and also i'd like to ultimately help get "node" privacy on ipfs so we can have users choose who sees their data
<daviddias>
voxelot: if I understand, you are looking to build a Node.js app that 'contacts' the IPFS API, right?
<voxelot>
yessir!
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<voxelot>
i've been working with js and the api already
<daviddias>
whyrusleeping: was mentioning that we are building a full Node.js implementation of IPFS, compliant with the spec. But until we have that, you can use the ipfs-api module (http://npmjs.org/ipfs-api), it's a wrapper on top of the HTTP API go-ipfs exposes
<daviddias>
nice! Have you used the ipfs-api module yet?
<voxelot>
yup i have, really cool stuff, adding has been a bit of trouble but i know whyrusleeping put some work into that this week
<voxelot>
still haven't made it work myself lol, and get 403 responses from ipfs when i cat and ls
<voxelot>
but i'm sure i'm just doing something wrong with the api, so i wanted to dive deeper and see how ipfs was working in go
<daviddias>
interesting, are you pointing to your local IPFS node?
<daviddias>
You can try with "*" , not a best practice, but if it is CORS giving you a 403, it solves the problem
<daviddias>
But only if you are requesting from the browser
<daviddias>
If you are using the ipfs-api from Node, then it must be another thing
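(The allow-origin setting being discussed is normally applied to the daemon's API config; a hedged example, since the exact key name and accepted values can vary between go-ipfs versions:)
    ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["*"]'
    # restart the daemon afterwards so the new header takes effect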
<voxelot>
well i'm not sure it is, i am using it from node, and i had the cors error message saying i didnt have access, so i allowed it and that message went away
<voxelot>
and my console started returning 403s when i tried getting my files
<voxelot>
let me see if i can reproduce it
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet>
whyrusleeping we should make benchmarks docs and throw TONS of data at it
<jbenet>
whyrusleeping could be a great blog post, too.
<jbenet>
also what sorts of data it's good for vs other things
<jbenet>
(we can try vm images, linux isos, videos, jpegs, documents, etc)
<whyrusleeping>
i was gonna work on an ipfs perf program
<whyrusleeping>
that would benchmark the performance of a bunch of different ipfs things and output a large table of the data
<whyrusleeping>
so we could run it on every commit
<jbenet>
that would be very sweet
<whyrusleeping>
(and it would also run some benchmarks of the machine its being run on to make comparison of numbers easier)
<jbenet>
chunkers useful beyond ipfs though, other people could be incented to write better chunking algos if there was a good/easy way to benchmark them against lots of workloads
<whyrusleeping>
yeah
andrei_s has quit [Remote host closed the connection]
inconshreveable has joined #ipfs
eyebloom has quit [Quit: eyebloom]
<rschulman>
whyrusleeping: I’m still getting the no such rootfs error when I try to start up daemon --mount
eyebloom has joined #ipfs
<rschulman>
was that fixed or should I do it differently?
eyebloom has quit [Client Quit]
<whyrusleeping>
rschulman: this on the mfs branch?
<rschulman>
hm
<rschulman>
I just did a go get
<rschulman>
not sure which branch that pulls
<rschulman>
but my local branch is set to mfs yeah
<whyrusleeping>
one sec
<konubinix>
Hi all
eyebloom has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to feat/real-rabin: http://git.io/v3JOq
<ipfsbot>
go-ipfs/feat/real-rabin 52aacb6 Jeromy: move chunker type parsing into its own file in chunk...
<rschulman>
there we go
<konubinix>
I am trying to debug some ipfs command
<rschulman>
just switched my local branch to master and its working, sorry
<konubinix>
Any instruction on how to start ?
<whyrusleeping>
konubinix: what command? and whats wrong?
<jbenet>
krl are you around?
notduncansmith has joined #ipfs
<konubinix>
whyrusleeping: I would like to debug issue 1554 and, while doing that, get a first feel for the code
notduncansmith has quit [Read error: Connection reset by peer]
<konubinix>
whyrusleeping: Thus the command is ipfs refs local
<whyrusleeping>
konubinix: ah, okay. So can you reproduce the error in the issue?
<konubinix>
Yes
<whyrusleeping>
okay, so the code youre going to want to look at:
<konubinix>
I am not fluent in go environment, so I don't know what to do to start debugging the code
<whyrusleeping>
its logged as debug, so nobody would see it unless they turn all the logs on
<whyrusleeping>
rschulman: hrm... dangit
<rschulman>
yeah
<whyrusleeping>
what version of OSX and OSXfuse?
<konubinix>
whyrusleeping: Just to make sure I am starting things well. I run "make build" then "gdb cmd/ipfs/ipfs" and I can issue the command "run refs local"
<rschulman>
even more dangit since you can’t replicate
<whyrusleeping>
konubinix: that should do it, as long as you arent running a daemon
<konubinix>
It looks like I can debug that way, is there any recommendation you could give me ?
<rschulman>
OSX: 10.10.4
<konubinix>
whyrusleeping: Ok. Thank you again for your help :-)
<rschulman>
fuseversion: 27
<whyrusleeping>
konubinix: i havent actually used gdb since working on the freeBSD kernel, and thats quite a bit different than this
eyebloom has quit [Quit: eyebloom]
<whyrusleeping>
so i'm not sure what advice to give on that end
<whyrusleeping>
rschulman: can you log that in the issue?
<rschulman>
absolutely
<whyrusleeping>
(so i dont have to ask again when i switch to osx later)
<jbenet>
(will send out later tonight cc daviddias)
<daviddias>
was onto electron app stuff
<daviddias>
but I can shift
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<rschulman>
whyrusleeping: Done
<rschulman>
sorry I can’t help more. I don’t know what the issue is on my end.
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<voxelot>
@daviddias, ok so i'm still getting 403's back, not sure what distinction you are making between browser and ipfs, when i mention console i mean the node console or js console
<voxelot>
can ignore the form, just for future user input into ipfs
<voxelot>
i got past CORS, and i'm checking console logs in chrome console
<voxelot>
for the response
<daviddias>
so, first confusion is the pipe(process.stdout)
<daviddias>
process.stdout doesn't exist in a browser context
<voxelot>
shouldn't that output to the console though?
<daviddias>
neither `pipe`
<voxelot>
okay, my js is really noob haha
<voxelot>
so how would i pull a json object off ipfs
<voxelot>
store it in my javascript, really all i'm trying to do
<voxelot>
going to put a simple .json on my ipfs node now
<daviddias>
checking that for you ;) Making sure of one thing, get back to you in a sec
<voxelot>
you're the best
<daviddias>
Well, I managed to get the file you are serving
<daviddias>
with the same code
<daviddias>
you see, in the example you are running, the if condition checks to see if `res` is readable, that is basically to check if we are in Node.js land or Browser land
<daviddias>
in browser land, the res is returned as a string
notduncansmith has quit [Read error: Connection reset by peer]
therealplato has joined #ipfs
<voxelot>
haha :)
<voxelot>
ipfs version 0.3.5
<voxelot>
update maybe?
<voxelot>
and so the res sent you back a string of garbage since it's an image right
<daviddias>
yep, update and try again
<voxelot>
how does one update from the go get
<voxelot>
just tried pulling the go get -u github.com/ipfs/go-ipfs/cmd/ipfs
<voxelot>
same version number still
<daviddias>
interesting, that should work
<daviddias>
whyrusleeping ping ^ any tips
<whyrusleeping>
uhm
<whyrusleeping>
cd $GOPATH/src/github.com/ipfs/go-ipfs && git pull origin master && go install ./cmd/ipfs
<daviddias>
thank you
<whyrusleeping>
if that doesnt work, then you dont have your GOPATH and PATH set correctly
<daviddias>
voxelot can you try that ^
<voxelot>
already up to date
<voxelot>
but number changed
<voxelot>
thanks!
<whyrusleeping>
ah, go get might not actually run the install
<voxelot>
it worked no more 403!
<voxelot>
thanks guys
<whyrusleeping>
wooo!
<daviddias>
woot! :D
<mentos1386>
can you edit files once they are published using ipfs add ?
<voxelot>
^ interested in a way as well if that's possible
<whyrusleeping>
mentos1386: not exactly
<whyrusleeping>
ipfs is an immutable filesystem
<whyrusleeping>
if you change a file, it becomes a different file
<whyrusleeping>
since files are referenced by their hashes
<mentos1386>
so, if you want to update something, you have to give users new hashes?
<voxelot>
garbage collection will be good enough to sweep up old hashes if i have a web app that edits data on the front end and sends back new json objects at user request?
<whyrusleeping>
voxelot: yes
<voxelot>
cool
<whyrusleeping>
mentos1386: only if youre giving them /ipfs/... hashes
<whyrusleeping>
we have a naming system that alleviates that
<voxelot>
ipns right
<whyrusleeping>
ipns allows you to have a mutable pointer to immutable content
<whyrusleeping>
that feature is still 'not finished', it works in a very basic form right now
<whyrusleeping>
but its going to be wayyyyyyy better in the coming weeks
<mentos1386>
if i get it right, ipns is like dns, but instead of an ip it uses hashes?
<whyrusleeping>
yeap!
<whyrusleeping>
hashes and public key cryptography
<mentos1386>
but, if i change a file (upload a new one), i have to give ipns the new hash?
notduncansmith has joined #ipfs
<whyrusleeping>
nope, same hash
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping>
you just update the ipns record with your changed content
<whyrusleeping>
just like updating a dns record to point to a new ip
<mentos1386>
but how does it know which hash is the new file?
<whyrusleeping>
you run something like 'ipfs name publish <new file hash>'
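(A hedged end-to-end example of that update cycle; the file name is made up:)
    NEW=$(ipfs add -q page-v2.html)   # new content, so a new hash
    ipfs name publish "$NEW"          # same ipns name, now pointing at the new hash
    ipfs name resolve                 # now returns /ipfs/<new hash> for this node's peer ID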
zabirauf has quit [Max SendQ exceeded]
<mentos1386>
aha ok great, thanks
<whyrusleeping>
mentos1386: again, its gonna be easier to work with all this in the near future, but for now, the basic ideas are there
<mentos1386>
no problem, i just found this out today, it looks awesome as is.
notduncansmith has quit [Read error: Connection reset by peer]
<voxelot>
so when i pull from ipfs with the api, the pipe isnt readable for the browser so it hops over to outputting as a string, any thoughts on how i could parse this data into a json object?
<voxelot>
or maybe build a json return type into the api in the future?
<whyrusleeping>
voxelot: which api calls?
<voxelot>
ipfs.cat
<voxelot>
sends over data in the response
<whyrusleeping>
okay
<whyrusleeping>
and youre expecting that data to be json?
<whyrusleeping>
like, the file youre catting is json?