<m10r>
Is it already possible to buy/trade FileCoin for BTC? Or in general buy FileCoin?
<musoke[m]>
filecoin doesn't really exist yet
mildred1 has quit [Read error: No route to host]
mildred4 has quit [Read error: No route to host]
<m10r>
Any plans when it launches ?
<m10r>
Or any other way to invest in IPFS?
mildred1 has joined #ipfs
<M-anomie>
Similar question, if filecoin is successful should I be able to buy drives that pay for themselves?
mildred4 has joined #ipfs
maxlath has joined #ipfs
lassulus has quit [Remote host closed the connection]
lassulus has joined #ipfs
lassulus has quit [Changing host]
lassulus has joined #ipfs
enzo__ has quit [Quit: Leaving]
clemo has quit [Remote host closed the connection]
clemo has joined #ipfs
<r0kk3rz>
thats the general idea, people pay you to host stuff
<Quiark_>
if the thing really takes off, the market will take care of optimizing and squeezing margins, it will be hard to earn anything significant. see: Bitcoin mining
<r0kk3rz>
yes i imagine it will mostly be datacentres operating much like they do now
<r0kk3rz>
economies of scale still apply
Soulweaver has joined #ipfs
espadrine` has joined #ipfs
<Mateon1>
Quiark_: True, but providing cheap storage is better than wasting power on hashing, I think.
<m10r>
Quiark_: do you think storage providing will be done by a few major players at scale, as is the case with bitcoin mining?
<lemmi>
i wouldn't be surprised if a sizeable amount came just from people that happen to have some space and bandwidth to play with
ianopolous has quit [Remote host closed the connection]
maxlath1 has joined #ipfs
maxlath has quit [Ping timeout: 255 seconds]
maxlath1 is now known as maxlath
john3 has joined #ipfs
<Quiark_>
yeah it may end up being Amazon S3 IPFS Filecoin (TM). Because serving up your disk space from your home may earn you 0.0000....1$
<Mateon1>
m10r: You could put up bounties on IPFS issues, that will certainly get more attention to these issues, making IPFS better. Not aware of any direct investing possibility
cxl000 has joined #ipfs
<lemmi>
Quiark_: but as long as it doesn't cost you, why wouldn't you do it?
jaboja has quit [Remote host closed the connection]
<r0kk3rz>
Quiark_: remember people share stuff on bittorrent for $0
<lemmi>
especially if you are running a node anyway for other stuff
<r0kk3rz>
filecoin isnt the only way to get stuff hosted anyway
maxlath has quit [Ping timeout: 246 seconds]
maxlath1 has joined #ipfs
<Quiark_>
sure I was looking at it from money making perspective
<Quiark_>
similar to PoS staking on Ethereum - it will probably only be available to well-equipped datacenters. Others may even lose money on it
maxlath1 is now known as maxlath
rendar has joined #ipfs
reit has quit [Quit: Leaving]
[X-Scale] has joined #ipfs
X-Scale has quit [Ping timeout: 260 seconds]
[X-Scale] is now known as X-Scale
maxlath1 has joined #ipfs
maxlath has quit [Ping timeout: 240 seconds]
maxlath1 is now known as maxlath
bedeho has quit [Remote host closed the connection]
rcat has joined #ipfs
Guest235322[m] has joined #ipfs
robattila256 has quit [Quit: WeeChat 1.8]
btmsn has joined #ipfs
bedeho has joined #ipfs
<Soulweaver>
Is it possible to get stats on how many people have pinned a hash, or how many people are seeding a hash?
<lemmi>
Soulweaver: ipfs dht findprovs
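(A minimal usage sketch; QmSomeHash is a placeholder for the hash you care about. findprovs prints the peer IDs currently advertising that block, so counting the lines gives a rough idea of how many nodes provide it:)
    ipfs dht findprovs QmSomeHash          # list peers providing the block
    ipfs dht findprovs QmSomeHash | wc -l  # rough provider count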
m3lt has quit [Ping timeout: 260 seconds]
corvinux has joined #ipfs
erikj has quit [Quit: kfonx]
robattila256 has joined #ipfs
<Soulweaver>
Should this command take a long time to output?
<Kubuxu>
Soulweaver: it will try to find up to 20 providers
bedeho has quit [Remote host closed the connection]
<Soulweaver>
Ok I see, I was being dumb. I thought it would work if I had the <hash>/file.ext; it seems it won't, but it works fine on the folder containing everything. Should I assume that list is providing every file in that folder?
gmcabrita has joined #ipfs
<Magik6k>
Soulweaver, no, it means that the peer has that directory block (it will usually have the files too, but I wouldn't depend on that). To check if a peer has a file you first need to do `ipfs resolve [hash]/some/path` to get the file hash and then run findprovs on it.
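(A sketch of that two-step check, with placeholder hashes and a made-up path:)
    ipfs resolve /ipfs/QmDirHash/some/file.ext   # prints the file's own hash, e.g. /ipfs/QmFileHash
    ipfs dht findprovs QmFileHash                # then ask the DHT who provides that file hash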
<Soulweaver>
Ahh I see, I feel dumb lmao
Boomerang has joined #ipfs
erikj has joined #ipfs
erikj has joined #ipfs
erikj has quit [Changing host]
jaboja has joined #ipfs
dimitarvp has joined #ipfs
reit has joined #ipfs
jkilpatr has quit [Ping timeout: 255 seconds]
robattila256 has quit [Quit: WeeChat 1.8]
dimitarvp` has joined #ipfs
dimitarvp has quit [Disconnected by services]
dimitarvp` is now known as dimitarvp
jaboja has quit [Ping timeout: 240 seconds]
jkilpatr has joined #ipfs
bedeho has joined #ipfs
aedigix has quit [Remote host closed the connection]
aedigix has joined #ipfs
jaboja has joined #ipfs
sein has joined #ipfs
sein is now known as Guest1521
archpc has quit [Ping timeout: 268 seconds]
bhstahl has joined #ipfs
bhstahl has quit [Client Quit]
MuNk` has quit [Ping timeout: 255 seconds]
MuNk` has joined #ipfs
bhstahl has joined #ipfs
jaboja has quit [Ping timeout: 260 seconds]
MuNk` has quit [Quit: leaving]
bhstahl has left #ipfs [#ipfs]
bhstahl has joined #ipfs
bhstahl has quit []
bhstahl has joined #ipfs
<xelra>
I still don't get how this "hashing little blocks" works. So if I have a text file from someone and then I edit it, will I still be a peer for the original file or parts of it?
<xelra>
And what if we have totally different files that by coincidence share common blocks. Will those mix? I just don't get how that can work.
<r0kk3rz>
a block with the same hash is *the same block*
<r0kk3rz>
so yes, common blocks will mix because they are in effect, the same thing
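(One way to see that in practice; the file names are made up, and ipfs refs simply lists the child block hashes of an object:)
    ipfs add file_a.iso    # prints QmHashA
    ipfs add file_b.iso    # a different file sharing some content with file_a, prints QmHashB
    ipfs refs QmHashA      # child block hashes of file_a
    ipfs refs QmHashB      # chunks whose bytes are identical show up with the same hash in both lists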
<xelra>
I'm just wondering. Basically the hash is a simplification (compression) of the block that identifies it. But if even the blocks can be the same by coincidence, wouldn't the hashes collide even more?
<xelra>
Possibly pointing to blocks that are NOT identical?
Boomerang has quit [Ping timeout: 246 seconds]
Boomerang has joined #ipfs
<r0kk3rz>
that depends on the hashing algo, which are designed to reduce collisions
ebarch has quit [Quit: ebarch]
ebarch has joined #ipfs
maxlath has quit [Ping timeout: 246 seconds]
<voker57>
collisions are statistically highly unlikely, especially if you attempt to collide meaningful data. But theoretically yes, this is a possibility
corvinux has quit [Remote host closed the connection]
maxlath has joined #ipfs
sirdancealot has quit [Remote host closed the connection]
arpu has quit [Remote host closed the connection]
jaboja has joined #ipfs
bhstahl has quit [Remote host closed the connection]
arpu has joined #ipfs
bedeho has quit [Remote host closed the connection]
JayCarpenter has joined #ipfs
chris6131 has joined #ipfs
asyncsec has joined #ipfs
shizy has joined #ipfs
<xelra>
So is it even more unlikely that two blocks from different files are the same?
bedeho has joined #ipfs
<Kubuxu>
xelra: you have a 1 in 115792089237316195423570985008687907853269984665640564039457584007913129639936 chance that two different blocks will have the same hash
userydfkumydfjyt has joined #ipfs
btmsn has quit [Ping timeout: 255 seconds]
Soulweaver has quit [Read error: Connection reset by peer]
<xelra>
I see. That's low. I remember with ed2k, when the network grew, you'd sometimes get something completely different.
<xelra>
Then the chance that two blocks are identical must be even lower than that.
<xelra>
So the idea is that blocks are not identical by coincidence, because of low chance. You only get the same blocks, even from different files, because they're somehow related?
<r0kk3rz>
they could be revisions for instance
<victorbjelkholm>
yeah, I'm guessing websites are a good example. If every website uses the same lib + version of that lib, it'll be shared by everyone together, even though the hash for the complete website is different
<Mateon1>
Ideally we would like to make that as low as possible, but currently keeping network connections open to other peers consumes a lot of memory. I have trouble running IPFS on a 1GB vps, within a couple of days it dies from OOM
userydfkumydfjyt has quit [Quit: Leaving]
<Mateon1>
2GB should be enough for everything it wants, even with heavy use, but there could still be memory leaks in IPFS
<Mateon1>
(And very likely they are - alpha software)
<Mateon1>
there* are
<kvda>
great thanks for the info Mateon1 :)
<kvda>
go-ipfs install instructions are out of date in terms of how Go paths are set up
<Mateon1>
Oh, did something change?
<victorbjelkholm>
another way to deal with OOM is to add swap, if you can't get more memory
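(For reference, creating a swap file on a typical Linux box looks roughly like this; the 2G size and /swapfile path are just an example:)
    sudo fallocate -l 2G /swapfile   # reserve 2 GB for swap
    sudo chmod 600 /swapfile         # restrict permissions
    sudo mkswap /swapfile            # format it as swap space
    sudo swapon /swapfile            # enable it (add an /etc/fstab entry to make it persistent)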
<Mateon1>
victorbjelkholm: I tried that in the past, the provider killed my server due to insane IO thrashing
maxlath has quit [Ping timeout: 240 seconds]
<victorbjelkholm>
Mateon1: oh, which provider? Had no issue doing that on DO
<Mateon1>
Well, doesn't matter, I'll change the provider once I run out of credit
<kvda>
Mateon1 yea the go-ipfs instructions don't mention GOBIN
<Mateon1>
kvda: Ah, haven't heard about that. Which Go version is that?
<Mateon1>
kvda: I know the readme, but I don't know which Golang version made the GOBIN change; I hadn't heard about it. You can submit a PR if you know the details
<Mateon1>
Hm, that might be your network blocking certain ports
<pidgen>
oh and webui at localhost:5001/webui also tries to connect forever
<Mateon1>
pidgen: It tries to request it from IPFS, if there are no peers, it won't have the content
<pidgen>
i've checked with external port scanner. upnp opened port is open
<Mateon1>
Try connecting to this node, it's one of the public bootstrap nodes: ipfs swarm connect /ip4/178.62.158.247/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd
<pidgen>
how to debug this thing?
<Mateon1>
If it throws an error, there might be outgoing port filtering
<pidgen>
ok i do believe the problem is it tries to use ipv6
<pidgen>
how to force it to v4 only?
<Mateon1>
Hm, interesting
<Mateon1>
The /ip4 should force it...
<Mateon1>
You can also try /ip6/2a03:b0c0:0:1010::23:1001/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd
<Mateon1>
It's the same node, but using ipv6, assuming you have a working configuration
<pidgen>
nah ipv6 doesnt work for me
<pidgen>
from the error message, i assumed it still uses ipv6, since "dial attempt failed: dial tcp6"
<pidgen>
note the tcp6
<Mateon1>
Yep, this time I intended to try ipv6
<pidgen>
despite the link tells it to go through v4
<pidgen>
am i correct?
<Mateon1>
You might want to disable ipv6 completely if it is broken
<pidgen>
ok ill try
jmill has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
ligi_ has joined #ipfs
<Mateon1>
sudo sysctl net.ipv6.conf.eth0.disable_ipv6=1 - this will disable ipv6 on the eth0 interface (until next reboot)
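(An alternative sketch, untested here: restrict the go-ipfs swarm addresses to ipv4 in the node config, then restart the daemon:)
    ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4001"]'   # listen on ipv4 only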
<pidgen>
i did this: sudo sysctl net.ipv6.conf.wlp2s0.disable_ipv6=1
<Mateon1>
Did anything change?
<pidgen>
restarted daemon. but it still doesnt work. the message is still the same
<Mateon1>
Hm
<pidgen>
swarm shows no peers
<pidgen>
in fact daemon still shows Swarm listening on /ip6/::1/tcp/4001
tg has quit [Excess Flood]
<pidgen>
among other things
<Mateon1>
Hold on, I'm looking for nodes that have alternative ports
taaem has quit [Ping timeout: 246 seconds]
<Mateon1>
Hm, well, that's tricky
<pidgen>
wha?
<Mateon1>
I see no nodes that I am connected to, which have common ports like 80, 8080, 443
<pidgen>
it's kind of weird if after "ipfs swarm connect /ip4/" it shows an error about tcp6
<Mateon1>
If you had a firewall, these would likely bypass
btmsn has joined #ipfs
<pidgen>
no, i dont have a firewall. just NAT
tg has joined #ipfs
<pidgen>
all other stuff works fine (syncthing, chatting apps)
<pidgen>
they seem to lease ports through upnp, just like ipfs did
<pidgen>
is it normal that it tries to use tcp6 for ip4 links?
<Mateon1>
Hm, if you /msg me an address for your IPFS node (get it with ipfs id), I'll try connecting. Let's see if I can connect
<dsal>
I finally made a key I could use to store all the keys I want to pin.
<dsal>
er, hashes. Whatever. heh
<Mateon1>
pidgen: I think, maybe. ipfs swarm connect treats addresses more as a suggestion, rather than a rule
<Mateon1>
It mostly cares about the /ipfs/Qm... part
<Mateon1>
pidgen: I connected successfully, do you have a peer now?
<pidgen>
tons of them!
<pidgen>
why couldn't it find the peers before?
<pidgen>
and webui now works
<pidgen>
weird
<Mateon1>
You might wish to save a few that you have in ipfs swarm peers (into ipfs bootstrap). It seems that the public gateways were somehow broken for you
<pidgen>
ok thnx
<pidgen>
does it save them automatically?
<Mateon1>
No, unfortunately
<Mateon1>
You have to do that manually
<Mateon1>
As in, ipfs bootstrap add /ip4/.../tcp/.../ipfs/Qm... - will save it
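(A quick sketch of doing that in bulk; it assumes the multiaddrs printed by ipfs swarm peers already end in /ipfs/Qm..., which is the format bootstrap add expects:)
    ipfs swarm peers | while read addr; do ipfs bootstrap add "$addr"; done   # save current peers as bootstrap entries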
<pidgen>
ok thnx
<pidgen>
do they plan to implement autosaving all the seen nodes?
<Mateon1>
Probably not all seen nodes, but I do believe an automatic bootstrap would be useful, let me search IPFS issues
<dsal>
Taking forever to find my objects, though.
<shyamsk_mob>
Hey ppl, was authentication ever implemented? The private key store kinda thing
<shyamsk_mob>
So that private networks may exist.
<Mateon1>
shyamsk_mob: There exists `ipfs key`, but that's only for IPNS
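(For context, a sketch of what `ipfs key` does today; the key name is made up:)
    ipfs key gen --type=rsa --size=2048 mykey       # generate a named IPNS keypair
    ipfs name publish --key=mykey /ipfs/QmSomeHash  # publish a hash under that key's IPNS name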
<Mateon1>
I can resolve the IPNS name, but not the hash it points to
<dsal>
points to my node. But things can't get to my node.
Encrypt has joined #ipfs
<dsal>
My node's connected. This is basically how I've been doing things, but today it doesn't work.
<shyamsk_mob>
Mateon1: right now? Why change it? This seems like the most logical way. Why create additional headaches (like you said)?
<Mateon1>
shyamsk_mob: I don't think there is a need to change it - poor choice of words. But thought was given in the design for other possible implementations of private networks
<dsal>
I'm able to ping some of the nodes with ipfs.io content. It's a little weird that I can't ping myself.
<dsal>
Well, ipfs fails me today.
sirdancealot has joined #ipfs
Encrypt has quit [Quit: Quit]
flyingzumwalt has quit [Ping timeout: 240 seconds]
flyingzumwalt has joined #ipfs
cdata has quit [Ping timeout: 246 seconds]
pidgen has quit [Quit: Page closed]
cdata has joined #ipfs
espadrine` has quit [Ping timeout: 245 seconds]
maxlath1 has joined #ipfs
maxlath has quit [Ping timeout: 246 seconds]
maxlath1 is now known as maxlath
<dsal>
Everything trying to access these objects hangs -- including ipfs.io. That's interesting.
dgrisham has quit [Ping timeout: 240 seconds]
jmill has joined #ipfs
bhstahl has joined #ipfs
gastrolith has quit [Ping timeout: 255 seconds]
bhstahl has quit [Ping timeout: 260 seconds]
jaboja has joined #ipfs
ShalokShalom_ has joined #ipfs
<whyrusleeping>
dsal: hrm... your node is online with data and others cant access it?
jkilpatr_ has joined #ipfs
<whyrusleeping>
dsal: hrm... i'm having trouble establishing a connection to your node.
<whyrusleeping>
What is the external ip/port it should have open?
<dsal>
whyrusleeping: Yeah, I can't seem to get into my node from the outside. Not sure why.
<dsal>
There shouldn't be any direct connection in.
<whyrusleeping>
oh
<whyrusleeping>
so nobody can dial *in* to your node?
<dsal>
right
john3 has joined #ipfs
<whyrusleeping>
is that expected behaviour? or not what youre hoping would happen?
jaboja has quit [Quit: Leaving]
<dsal>
This is how I've been running things. I'm surprised it stopped working.
galois_dmz has quit [Remote host closed the connection]
galois_dmz has joined #ipfs
cranky-sleep has joined #ipfs
sdgathman has joined #ipfs
<sdgathman>
theshadowbrokers.bit points to a zeronet site. Why would someone use zeronet? The python reference implementation probably helps deployment - but how does the design compare?
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
<sdgathman>
Ah, zeronet signs the hash of a site. ipfs hashes files (and even blocks).
ashark has quit [Ping timeout: 268 seconds]
ashark has joined #ipfs
wak-work has quit [Remote host closed the connection]
jaboja has quit [Ping timeout: 240 seconds]
ashark has quit [Ping timeout: 260 seconds]
robattila256 has joined #ipfs
reit has joined #ipfs
dgrisham has quit [Quit: WeeChat 1.8]
droman has quit []
btmsn has quit [Ping timeout: 268 seconds]
jaboja has joined #ipfs
wak-work has joined #ipfs
cxl000 has quit [Quit: Leaving]
pawn has joined #ipfs
<pawn>
Is a content address just a hash of the file's content or is it more complicated than that?
taaem has quit [Ping timeout: 240 seconds]
Mateon1 has quit [Remote host closed the connection]
reit has quit [Ping timeout: 260 seconds]
bhstahl has joined #ipfs
Jesin has quit [Quit: Leaving]
bhstahl has quit [Ping timeout: 268 seconds]
<whyrusleeping>
pawn: its a little more complicated
<pawn>
explain? :D
<whyrusleeping>
the hash is the hash of the merkletree that represents the file
<pawn>
I have 15 minutes
<whyrusleeping>
so large files are broken up into chunks
<flyingzumwalt>
if it's larger than 256k ipfs will break the file up into chunks.
Jesin has joined #ipfs
* whyrusleeping
steps aside for flyingzumwalt to explain
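(A sketch of seeing the chunking, assuming bigfile is anything over 256k:)
    ipfs add bigfile               # prints the root hash, e.g. QmRootHash
    ipfs refs QmRootHash           # hashes of the ~256k chunks the file was split into
    ipfs object links QmRootHash   # the same child links, with their sizes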
<pawn>
AFAIK it's a hash of two hashes which are themselves a hash of two hashes and so on, until you get to leave hashes which are hashes of pieces of data?
<pawn>
leaf*
sonata has joined #ipfs
cwahlers has joined #ipfs
<whyrusleeping>
that would be one way to do a binary merkle tree
<whyrusleeping>
a merkle tree in general is a tree where hashes are used to link to children instead of pointers
<pawn>
children?
<pawn>
You mean other hashes?
<whyrusleeping>
a tree has child nodes, right?
<whyrusleeping>
generally, when you implement a tree, you have pointers to those child nodes
<whyrusleeping>
in a merkle tree, you have the hashes of those child nodes, instead of a memory pointer
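(You can look at those hash-links directly; QmRootHash is a placeholder for any added file or directory:)
    ipfs object get QmRootHash   # JSON with a Data field and a Links array
    # each Links entry carries Name, Size and the Hash of a child node;
    # the hash plays the role a memory pointer would play in an ordinary tree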
chris6131 has left #ipfs [#ipfs]
<pawn>
isn't that effectively the same thing?
<pawn>
What I mean is, a hash of a child node does the same job as a pointer to a child node
<pawn>
it links to the child node in some way
infinity0_ has joined #ipfs
infinity0_ has quit [Remote host closed the connection]
<pawn>
I'll have to revisit this discussion unfortunately (have to leave). Though thanks for the reading material (urls). I'll have to dig deeper into this topic before asking a bunch of questions too
robattila256 has quit [Ping timeout: 268 seconds]
<pawn>
cheers! :)
<whyrusleeping>
right, but it's an immutable pointer