<whyrusleeping>
it feels like the quality of your scans is getting better
<whyrusleeping>
did you get new hardware?
<alu>
no just practice, looking forward to better hardware tho
sahib has quit [Ping timeout: 248 seconds]
<whyrusleeping>
noice, i'm looking forward to realtime 3d environment streaming :)
sahib1 has quit [Ping timeout: 276 seconds]
s_kunk has quit [Read error: Connection reset by peer]
s_kunk has joined #ipfs
voxelot has joined #ipfs
hagarth has quit [Ping timeout: 268 seconds]
hagarth has joined #ipfs
JZA has joined #ipfs
silotis has quit [Remote host closed the connection]
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
jaboja has quit [Ping timeout: 246 seconds]
<VictorBjelkholm>
is there any way I can have a distributed index that everyone can "commit" to (add content) without centralizing it? I'm having trouble figuring out anything that might help with this; I always end up with one central component (the one receiving data and adding it to the index)
<deltab>
Arakela007: it might be possible to use DAV with an HTTP gateway
mildred has joined #ipfs
ruby32 has quit [Remote host closed the connection]
ruby32 has joined #ipfs
mildred has quit [Ping timeout: 240 seconds]
nicolagreco_ has joined #ipfs
nicolagreco has quit [Ping timeout: 276 seconds]
nicolagreco_ is now known as nicolagreco
conway has quit [Ping timeout: 268 seconds]
zorglub27 has joined #ipfs
pfraze has quit [Remote host closed the connection]
ruby32 has quit [Remote host closed the connection]
ruby32 has joined #ipfs
conway has joined #ipfs
pfraze has joined #ipfs
Boomerang has joined #ipfs
pfraze has quit [Remote host closed the connection]
pfraze has joined #ipfs
<ipfsbot>
[js-ipfs] diasdavid pushed 2 new commits to master: https://git.io/vV8Co
<ipfsbot>
js-ipfs/master 27c38c4 David Dias: Merge pull request #97 from noffle/no-force...
<ipfsbot>
js-ipfs/master 014d618 Stephen Whitmore: removes --force
tidux has left #ipfs [#ipfs]
Boomerang has quit [Ping timeout: 260 seconds]
_rht has quit [Quit: Connection closed for inactivity]
<apiarian>
can multiple nodes share a /ipfs mountpoint?
reit has quit [Quit: Leaving]
<lgierth>
no
<lgierth>
you can mount to a different prefix though
<lgierth>
e.g. /home/apiarian/ipfs
<apiarian>
well i was thinking more like having multiple ipfs nodes collocated on a server somewhere
<lgierth>
that you can do
<lgierth>
if you don't want them to mount /ipfs
zorglub27 has quit [Ping timeout: 244 seconds]
<apiarian>
right, but if node1 has loaded /ipfs/QmCOOLTHING into its storage, and node2 on the same machine wants /ipfs/QmCOOLTHING, then it might be more efficient for them to share the same disk locations than for each to keep its own copy. maybe there are security issues with this, though?
<lgierth>
they can't share the same repo at the moment
<lgierth>
they might in the future
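As a rough sketch of the two suggestions above (the paths and ports here are made up, and the flag names are the ones go-ipfs used around this time, so check `ipfs mount --help` and the config docs before copying):

    # mount under a prefix other than /ipfs, as lgierth suggests
    ipfs mount -f /home/apiarian/ipfs -n /home/apiarian/ipns

    # run a second, independent node on the same machine by giving it its own repo;
    # its API, gateway, and swarm addresses also need to be moved off the defaults
    IPFS_PATH=~/.ipfs2 ipfs init
    IPFS_PATH=~/.ipfs2 ipfs config Addresses.API /ip4/127.0.0.1/tcp/5002
    IPFS_PATH=~/.ipfs2 ipfs daemon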
sahib has quit [Ping timeout: 246 seconds]
zorglub27 has joined #ipfs
Looking has joined #ipfs
tmg has quit [Ping timeout: 268 seconds]
chriscool has quit [Ping timeout: 250 seconds]
zorglub27 has quit [Quit: zorglub27]
<VictorBjelkholm>
whyrusleeping, thanks, I thought about blockchain and blockchain-like technologies, but those would require people to "pay" something... I'm starting to think some DHT-like thing, where you get permission to add things if you help seed things, would maybe work
nicolagreco_ has joined #ipfs
nicolagreco has quit [Read error: Connection reset by peer]
nicolagreco__ has joined #ipfs
computerfreak has quit [Quit: Leaving.]
cemerick has joined #ipfs
nicolagreco_ has quit [Ping timeout: 252 seconds]
<chungy>
Hmm. I don't know how feasible it is, but what might be cool is an official URL shortener built into IPFS
<chungy>
Still base58, but take a cue from YouTube and use only ~11 characters. That should be plenty to go around: generate them randomly, and have something similar to "ipfs publish" for attaching them to a longer object hash
<chungy>
I'd say make them immutable, too. The main trick though is coordinating it with the whole network :/
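Purely to illustrate chungy's idea (this is not an IPFS feature, and the hard part noted above, getting the whole network to agree on the mappings, is ignored here): a random 11-character base58 ID has about 58^11 ≈ 2^64 possibilities, so collisions are unlikely at any realistic scale, and attaching one to a longer hash is just a key-value mapping.

    package main

    import (
        "crypto/rand"
        "fmt"
        "math/big"
    )

    // base58 alphabet (Bitcoin/IPFS style: no 0, O, I, or l)
    const alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    // newShortID returns a random ~11-character base58 string, roughly
    // the length chungy mentions for YouTube-style IDs.
    func newShortID() (string, error) {
        id := make([]byte, 11)
        max := big.NewInt(int64(len(alphabet)))
        for i := range id {
            n, err := rand.Int(rand.Reader, max)
            if err != nil {
                return "", err
            }
            id[i] = alphabet[int(n.Int64())]
        }
        return string(id), nil
    }

    func main() {
        // Toy, local-only registry: short ID -> full object hash.
        // Coordinating these mappings across the network, and keeping
        // them immutable, is the open problem and is not solved here.
        registry := map[string]string{}
        short, err := newShortID()
        if err != nil {
            panic(err)
        }
        registry[short] = "QmSomeMuchLongerObjectHash" // hypothetical target hash
        fmt.Println(short, "->", registry[short])
    }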
reit has joined #ipfs
reit has quit [Client Quit]
computerfreak has joined #ipfs
ecloud has quit [Remote host closed the connection]
ecloud has joined #ipfs
nicolagreco has joined #ipfs
disgusting_wall has joined #ipfs
nicolagreco___ has joined #ipfs
nicolagreco__ has quit [Ping timeout: 260 seconds]
nicolagreco has quit [Read error: Connection reset by peer]
nicolagreco___ is now known as nicolagreco
<nicolagreco>
daviddias: ping! you around?
hashcore has quit [Quit: Leaving]
<lgierth>
nicolagreco: he's in the middle of something
flobs has joined #ipfs
computerfreak has quit [Remote host closed the connection]
<nicolagreco>
no rush :)
conway has quit [Ping timeout: 276 seconds]
<Arakela007>
hey, I have a serious issue btw, sorry for my English. The best thing about addressing (linking) data by SHA is that you know what you'll get, but your Unix file system mapped onto DagNodes is not so good: you need to know in advance, just from looking at a link, whether it points to a file or to a directory
ecloud has quit [Remote host closed the connection]
<whyrusleeping>
VictorBjelkholm: how would you get permission on such a system?
<whyrusleeping>
also, a blockchain doesn't necessarily mean paying
ecloud has joined #ipfs
<whyrusleeping>
miners have the ability to add things as they want
<whyrusleeping>
maybe it gets tied into bitswap credit or something like that
<whyrusleeping>
at some point you need consensus
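As a sketch only (not anything IPFS ships), the "everyone can add, no central component" part of VictorBjelkholm's question can be modelled as a grow-only set, one of the simplest CRDTs: each node appends entries locally and periodically merges peers' state, and because merging is commutative and idempotent, replicas converge without a coordinator. What it does not give you is exactly what whyrusleeping points at: ordering, removal, or any control over who may add, which is where consensus or bitswap-style credit would have to come in.

    package main

    import "fmt"

    // GSet is a grow-only set: any node can add entries locally and
    // merge state from peers; merges commute and are idempotent, so
    // replicas converge without a central writer.
    type GSet struct {
        entries map[string]bool
    }

    func NewGSet() *GSet {
        return &GSet{entries: make(map[string]bool)}
    }

    // Add records an entry (e.g. a content hash) in this replica.
    func (s *GSet) Add(hash string) { s.entries[hash] = true }

    // Merge folds another replica's state into this one.
    func (s *GSet) Merge(other *GSet) {
        for h := range other.entries {
            s.entries[h] = true
        }
    }

    func (s *GSet) Len() int { return len(s.entries) }

    func main() {
        a, b := NewGSet(), NewGSet()
        a.Add("QmHashAddedByNodeA") // hypothetical entries
        b.Add("QmHashAddedByNodeB")
        a.Merge(b)
        b.Merge(a)
        fmt.Println(a.Len(), b.Len()) // both replicas converge: 2 2
    }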
<Akaibu>
whyrusleeping: do you think the hydrus network is notable enough to put into one of the weekly ipfs posts? I think it's mostly done updating for ipfs-related stuff, so it might be a good time to do it now
<Akaibu>
(Sorry for the word vomit)
<whyrusleeping>
Akaibu: yeah, that would be a good addition to the weekly!
<Akaibu>
So would you do it? Because I don't know how that works
<whyrusleeping>
if the one listed there doesn't work, file an issue and tag me in it (Or alternatively file a PR to fix it :D )
<csn>
I think that's exactly the same example, six months old
<csn>
Because.. erm.. code.google.com/p/go.net/context and stuff
<whyrusleeping>
okay, could you file an issue on that repo with the error output?
infinity0 has joined #ipfs
Kubuxu has quit [Read error: Connection reset by peer]
<csn>
deps errors mostly, and it might be that I'm just not following the procedure correctly, but I'll file one, sure
Magik6k_ has joined #ipfs
Kubuxu has joined #ipfs
Magik6k_ has quit [Max SendQ exceeded]
Magik6k has quit [Ping timeout: 276 seconds]
kaiza has quit [Remote host closed the connection]
Magik6k has joined #ipfs
kaiza has joined #ipfs
zootella has quit [Quit: zootella]
<whyrusleeping>
csn: yeah, if the guide doesn't make it dead simple to build things, then that's a bug
<apiarian>
when i tried to use the guide i ran into issues with certain dependencies in github.com/ipfs/go-ipfs coming from gx, while the example imports them by their normal internet names
<csn>
apiarian, that is exactly my issue too
<apiarian>
ya, so if you want to use something from go-libp2p alongside github.com/ipfs/go-ipfs, you may need to import it as "gx/ipfs/QmNefBbWHR9JEiP3KDVqZsBLQVRmH3GBG2D2Ke24SsFqfW/go-libp2p/p2p/peer"
<csn>
I have tried to manually install the gx deps but they get confused like "can't use gx/ipfs/1234/jbenet/util.Hash as github.com/jbenet/util.Hash"
<apiarian>
yup, so change your github.com/... stuff to gx/ipfs/... stuff from the error messages. what can go wrong, eh?
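To make that rewrite concrete, a minimal sketch: it only builds inside a GOPATH where go-ipfs has already vendored its gx packages, and the multihash in the path is the one apiarian quotes above, which changes every time the package is re-published through gx, so treat it as an example rather than a constant.

    package main

    import (
        "fmt"

        // instead of something like: peer "github.com/ipfs/go-libp2p/p2p/peer"
        peer "gx/ipfs/QmNefBbWHR9JEiP3KDVqZsBLQVRmH3GBG2D2Ke24SsFqfW/go-libp2p/p2p/peer"
    )

    func main() {
        var id peer.ID // the same peer.ID type that go-ipfs uses internally
        fmt.Printf("empty peer ID has length %d\n", len(id))
    }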
<csn>
But I don't have any imports of jbenet et cetera in my code
<apiarian>
hmm, that sounds like an issue with your ipfs install?
<apiarian>
does ipfs build correctly?
<csn>
Exactly, maybe I'm just dumb and can't get the environment set up correctly
<apiarian>
i followed the instructions in the github.com/ipfs/go-ipfs readme
* apiarian
is not an expert, just went through this over the past couple of weeks
<csn>
I haven't built go-ipfs on this wimn10 machine
<apiarian>
is that windows 10?
<csn>
Yeah, with some typos
<apiarian>
hmm. then i may be of no help here. i'm on a mac
<csn>
It has been like this: "go get" the examples, plain "go get" to fetch the deps, manually install the gx deps per the errors, and then they get confused.
<apiarian>
well it seems that some parts of the code are pulling in the github version of util.Hash and others are getting the gx/ipfs version. so... grep through the code?
<csn>
I wish I was driving GNU. Now I must first learn the grep equivalent on this system..
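For what it's worth, two rough Windows-side equivalents (generic suggestions, not anything from the guide): cmd's findstr can do the recursive search, and the go tool can report what a package actually depends on.

    rem search all .go files under the current tree for the stray import path
    findstr /s /n "github.com/jbenet" *.go

    rem or ask the go tool which packages end up in the resolved dependency graph
    go list -f "{{ .Deps }}" ./...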