lgierth changed the topic of #ipfs to: go-ipfs 0.4.4 has been released with an important pinning hotfix, get it at https://dist.ipfs.io | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | FAQ: https://git.io/voEh8 | Logs: https://botbot.me/freenode/ipfs/ | Code of Conduct: https://git.io/vVBS0 | Sprints: https://git.io/voEAh
_whitelogger has joined #ipfs
<vtomole> so the file is not available on ipfs.io until i request it with a link?
<achin> right. if nobody ever requests your file, then your file will only ever exist on your local node
<vtomole> Thank you!
PseudoNoob has quit [Quit: Leaving]
cemerick has joined #ipfs
pfrazee has joined #ipfs
vtomole_ has joined #ipfs
<vtomole_> Doesn't this also mean that somebody can guess a hash and see a file that was meant to be private?
<achin> yes
<achin> but the probability of doing that is pretty much zero
pfrazee has quit [Ping timeout: 250 seconds]
chax has quit [Remote host closed the connection]
chax has joined #ipfs
<vtomole_> Someone could just have a program that continually searches the "hash space" until they find something, right?
<achin> yes, but the universe would end before they found something useful :P
<whyrusleeping> vtomole_: technically, yes
<vtomole_> Ohh so the security is that the search space is just so damn large
chax has quit [Remote host closed the connection]
gmcquillan has quit [Ping timeout: 256 seconds]
taaem has quit [Ping timeout: 260 seconds]
<whyrusleeping> vtomole_: yeap
<whyrusleeping> and we reserve the right to make the hash space larger :P
<whyrusleeping> but the secrecy of hashes isn't the security model for ipfs
<whyrusleeping> if you really want things to be secure you'll use encryption and private networks
<achin> while private networks don't really exist today, you can still encrypt your private data
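achin's "pretty much zero" can be checked with back-of-envelope arithmetic. A sketch, modeling IPFS content addresses as bare SHA-256 digests and assuming a deliberately generous guess rate:

```python
# Back-of-envelope check of the "hash space is too large to search" claim.
# SHA-256 digests stand in for CIDs; the guess rate is an assumption.

SEARCH_SPACE = 2 ** 256          # number of possible SHA-256 digests
GUESSES_PER_SECOND = 10 ** 12    # a very generous trillion guesses/sec
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years_to_enumerate = SEARCH_SPACE / (GUESSES_PER_SECOND * SECONDS_PER_YEAR)

# The universe is roughly 1.4e10 years old; this dwarfs it by ~47 orders
# of magnitude, which is what "the universe would end first" means here.
assert years_to_enumerate > 1e50
```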
<vtomole> ahh i see. This technology is so cool!
* whyrusleeping gives achin the "I know what i'm talking about" plus sign
<achin> \o/
chax has joined #ipfs
<achin> vtomole: ipfs is pretty neat, yeah!
vtomole_ has quit [Ping timeout: 260 seconds]
<achin> i do kinda wonder if something will happen in the next 5 years that'll make sha-256 generally unusable (like how md5 is generally unusable today)
robattila256 has quit [Ping timeout: 250 seconds]
<vtomole> If quantum computers happen in 5 years, maybe; otherwise it's unlikely... we can't predict the future though :D
<vtomole> If I'm not mistaken, if I don't run the ipfs daemon, then I can't access my files from ipfs.io, because the daemon is what serves my files, right?
<achin> there could be some new attack on the algorithm
<achin> vtomole: correct
<achin> but there is a small (but important) exception
<achin> if your files have already been cached on the ipfs.io machines (or any other running daemon), then your ipfs daemon doesn't need to be running
<achin> as ipfs doesn't care *where* a file comes from. any node that has the file can provide it to anyone who asks
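The "doesn't care *where* a file comes from" property follows from content addressing: the requester re-hashes whatever it receives and rejects anything that doesn't match. A minimal sketch, using plain SHA-256 as a stand-in for real CIDs (which add multihash/multicodec framing):

```python
import hashlib

# Sketch: the address is derived from the content itself, so any node can
# serve a block, and the requester can verify it came back unmodified.

def address(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify(received: bytes, requested_addr: str) -> bool:
    # A block received from *any* peer is accepted only if it hashes
    # to the address we asked for.
    return address(received) == requested_addr

block = b"hello ipfs"
addr = address(block)

assert verify(block, addr)            # honest peer: accepted
assert not verify(b"tampered", addr)  # forged block: rejected
```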
captain_morgan has quit [Ping timeout: 245 seconds]
<vtomole> Theoretically, how would companies like twitter or Facebook put their websites on ipfs? There is no central server, so how would they store their databases and other types of data. Does this demand that websites have to be built in an entirely different way?
<achin> that's an important question
<achin> since a central database is a key part of almost all modern websites
<achin> to be honest, i'm not sure what the current thinking is
<achin> for a fully distributed and writable database that could be used for such apps
<vtomole> Maybe how ZeroNet works? I've heard it's not on ipfs.
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
OutBackDingo_ has quit [Remote host closed the connection]
OutBackDingo has joined #ipfs
cemerick has quit [Ping timeout: 256 seconds]
captain_morgan has joined #ipfs
pfrazee has joined #ipfs
captain_morgan has quit [Read error: Connection reset by peer]
dignifiedquire has quit [Quit: Connection closed for inactivity]
Xe has quit [Ping timeout: 250 seconds]
daviddias has quit [Ping timeout: 250 seconds]
daviddias has joined #ipfs
em-ly has quit [Ping timeout: 250 seconds]
richardlitt has quit [Ping timeout: 250 seconds]
sickill has quit [Ping timeout: 250 seconds]
hosh has quit [Ping timeout: 250 seconds]
cdata has quit [Ping timeout: 250 seconds]
Xe has joined #ipfs
sevcsik has quit [Ping timeout: 250 seconds]
feross has quit [Ping timeout: 250 seconds]
risk has quit [Read error: Connection reset by peer]
henriquev has quit [Ping timeout: 250 seconds]
risk has joined #ipfs
feross has joined #ipfs
hosh has joined #ipfs
cdata has joined #ipfs
em-ly has joined #ipfs
richardlitt has joined #ipfs
Taek has quit [Ping timeout: 250 seconds]
sevcsik has joined #ipfs
sickill has joined #ipfs
henriquev has joined #ipfs
xa0 has quit [Ping timeout: 250 seconds]
Taek has joined #ipfs
xa0 has joined #ipfs
xa0 has joined #ipfs
xa0 has quit [Changing host]
ne1 has quit [Ping timeout: 250 seconds]
ne1 has joined #ipfs
anewuser has quit [Quit: anewuser]
chax has quit [Remote host closed the connection]
zopsi has quit [Quit: Until next time.]
chax has joined #ipfs
chax has quit [Remote host closed the connection]
chax has joined #ipfs
chax has quit [Remote host closed the connection]
chax has joined #ipfs
chax has quit [Ping timeout: 260 seconds]
ylp has quit [Ping timeout: 260 seconds]
ylp has joined #ipfs
pfrazee has quit [Read error: Connection reset by peer]
pfrazee has joined #ipfs
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
mgue has quit [Ping timeout: 256 seconds]
rgrinberg has quit [Remote host closed the connection]
zopsi has joined #ipfs
kvda has joined #ipfs
mgue has joined #ipfs
seharder_ has joined #ipfs
seharder_ has quit [Client Quit]
seharder_ has joined #ipfs
ilyaigpetrov has quit [Quit: Connection closed for inactivity]
<whyrusleeping> achin: vtomole: The current mode of thinking is to use chains of trust paired with publish subscribe and CRDTs
<whyrusleeping> i recommend taking a look at orbit and orbit-db
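For a flavor of the CRDT part of that answer, here is a minimal grow-only counter (G-Counter), the textbook CRDT that orbit-db-style systems build on; node ids here are made up:

```python
# Minimal G-Counter sketch: each replica only increments its own slot,
# and merge takes a pointwise max, so replicas converge regardless of
# message ordering -- no central database required.

class GCounter:
    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}  # node_id -> count

    def increment(self):
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + 1

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Pointwise max is commutative, associative, and idempotent.
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment()
b.increment()
a.merge(b); b.merge(a)
assert a.value() == b.value() == 3
```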
ygrek has quit [Ping timeout: 256 seconds]
<achin> thanks for the hints!
<achin> is pubsub in the master branch yet?
<whyrusleeping> yeap!
<achin> sweeeeeet
<whyrusleeping> you have to enable it when running the daemon by passing the --enable-pubsub-experiment flag though
<achin> any compatibility issues when talking to non-pubsub-enabled peers?
<whyrusleeping> nope
* achin plays with all the [new] things
seharder_ has quit [Quit: seharder_]
<whyrusleeping> :D
<whyrusleeping> i'm working on some really nifty super duper secret things right now
seharder_ has joined #ipfs
<achin> :O
<kevina> whyrusleeping: have a sec, re https://github.com/ipfs/go-ipfs/pull/3325
<achin> gimme a hint! how many syllables! sounds like kittens you say! can you publish cats over ipfs!
vtomole has quit [Ping timeout: 260 seconds]
<achin> quick question on "ipfs pubsub pub" is the <data> field a file name? an ipfs hash? a string? something else?
Kane` has quit [Ping timeout: 256 seconds]
Kane_ has joined #ipfs
Kane_ is now known as Kane`
<whyrusleeping> achin: 'ipfs pubsub pub TOPIC DATA'
<achin> so like 'ipfs pubsub pub cats mrow'
<whyrusleeping> yeeap!
<whyrusleeping> kevina: sure
<whyrusleeping> looking
<whyrusleeping> kevina: yeah, you don't need to worry about the filestore for ipld
<whyrusleeping> 'ipfs add' will continue using the protobuf format
<kevina> okay, so what then is this new DAG format for?
<kevina> (sorry have not been following the IPLD discussion)
<kevina> links to where this is documented is fine
<kevina> whyrusleeping: ^
gmcquillan__ has joined #ipfs
gmcquillan__ is now known as gmcquillan
<whyrusleeping> Yeah, let me grab some ipld links
<whyrusleeping> entry: https://github.com/ipld/specs
seharder_ has quit [Quit: seharder_]
<kevina> okay, I guess I will have to read up on the spec before I can be of any use reviewing
seharder_ has joined #ipfs
<kevina> whyrusleeping: test cases will still be good :)
<whyrusleeping> yeap!
<whyrusleeping> good call out there
<kevina> thanks
arpu has quit [Ping timeout: 256 seconds]
<achin> whyrusleeping: another quick pubsub question: in the pubsub api, what's the binary data in the "from" field?
wmoh has joined #ipfs
mgue has quit [Quit: WeeChat 1.6]
<whyrusleeping> the from field is the peer ID of the peer who sent the message
<achin> thanks
* achin has so many ideas now
<achin> when will it be the weekend
<whyrusleeping> every day's the weekend
mgue has joined #ipfs
arpu has joined #ipfs
<achin> pls send memo to my boss on this important subject
bwerthmann has joined #ipfs
djdv has quit [Quit: leaving town, be back eventually]
ylp has quit [Ping timeout: 260 seconds]
ylp has joined #ipfs
Waex8iWei0ahJ7p has joined #ipfs
Aranjedeath has joined #ipfs
<whyrusleeping> achin: sure thing
<whyrusleeping> every day's the weekend, especially when work feels like a hobby
fleeky__ is now known as fleeky
Aranjedeath has quit [Quit: Three sheets to the wind]
Aranjedeath has joined #ipfs
iovoid has quit [Remote host closed the connection]
iovoid has joined #ipfs
BananaMagician has quit [Ping timeout: 260 seconds]
chris6131 has quit [Read error: Connection reset by peer]
seharder_ has quit [Quit: seharder_]
briarrose has quit [Quit: bye bye]
galois_dmz has quit [Read error: Connection reset by peer]
galois_dmz has joined #ipfs
pfrazee has quit [Remote host closed the connection]
briarrose has joined #ipfs
briarrose has joined #ipfs
briarrose has quit [Changing host]
tmro has quit [Ping timeout: 256 seconds]
gmcquillan has quit [Quit: gmcquillan]
todder has joined #ipfs
tmro has joined #ipfs
tmro has quit [Ping timeout: 256 seconds]
tmro has joined #ipfs
Oatmeal has quit [Quit: Suzie says, "TTFNs!"]
vandemar has quit [Read error: Connection reset by peer]
Aranjedeath has quit [Quit: Three sheets to the wind]
vandemar has joined #ipfs
vandemar has joined #ipfs
vandemar has quit [Changing host]
Oatmeal has joined #ipfs
sdgathman has quit [Quit: ZNC - http://znc.in]
sdgathman has joined #ipfs
ulrichard has joined #ipfs
zopsi has quit [Quit: Until next time.]
dignifiedquire has joined #ipfs
zopsi has joined #ipfs
lacour has quit [Quit: Leaving]
rendar has joined #ipfs
mildred has joined #ipfs
gigq_ has quit [Ping timeout: 260 seconds]
gigq has joined #ipfs
Kane` has quit [Remote host closed the connection]
polezaivsani has joined #ipfs
sametsisartenep has joined #ipfs
ylp1 has joined #ipfs
dmr has joined #ipfs
xnaveira has joined #ipfs
maxlath has joined #ipfs
G-Ray has joined #ipfs
captain_morgan has joined #ipfs
tmro has quit [Read error: Connection reset by peer]
s_kunk has quit [Ping timeout: 260 seconds]
<A124> pjz HTTPS proxy with basic auth should do. Or you can use SSL client certs.
<A124> What is the replication you will be offering? Just 1? And any takedown notice, or instant? Cause bogus DMCA would ruin non-infringing stuff.
lkcl has quit [Ping timeout: 252 seconds]
ianopolous has quit [Ping timeout: 252 seconds]
chungy has quit [Quit: ZNC - http://znc.in]
chungy has joined #ipfs
captain_morgan has quit [Ping timeout: 250 seconds]
s_kunk has joined #ipfs
captain_morgan has joined #ipfs
spilotro has quit [Ping timeout: 250 seconds]
spilotro has joined #ipfs
captain_morgan has quit [Ping timeout: 256 seconds]
maxlath has quit [Ping timeout: 250 seconds]
maxlath has joined #ipfs
cemerick has joined #ipfs
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
spilotro has quit [Ping timeout: 252 seconds]
maxlath has quit [Ping timeout: 260 seconds]
robattila256 has joined #ipfs
tmro has joined #ipfs
<dignifiedquire> hey daviddias are you around?
spilotro has joined #ipfs
lindybrits has joined #ipfs
<daviddias> morning dignifiedquire :)
funrep has joined #ipfs
<dignifiedquire> good morning :)
<dignifiedquire> how is life?
<daviddias> doing good 😃
<lindybrits> dignifiedquire and daviddias - Morning! :)
ilyaigpetrov has joined #ipfs
<daviddias> going to start merging things (which takes a bit because of the npm i npm release dance)
PseudoNoob has joined #ipfs
<daviddias> as I merge, you can start rebasing the async crypto PR :)
<daviddias> morning lindybrits !
<dignifiedquire> daviddias: look "ipfs files ls /npm-registry |wc -l => 238747"
<dignifiedquire> good morning lindybrits
<dignifiedquire> merge all the things
<daviddias> dignifiedquire: you are cloning npm?
<dignifiedquire> then rebase all the things
<dignifiedquire> yes
<dignifiedquire> been working on npm-on-ipfs at night
<daviddias> on top of the existing PR?
<dignifiedquire> and filling up whyrusleeping's hard drives
<dignifiedquire> yes
<daviddias> sweet :D
<dignifiedquire> there is a PR with my changes up
<daviddias> ahah
<dignifiedquire> but very much a wip still
espadrine has joined #ipfs
wallacoloo_ has quit [Quit: wallacoloo_]
<daviddias> will review after IPLD, thank you! Looking forward to it
<lindybrits> dignifiedquire and daviddias - Once you guys have time today (see you are busy with merging :) ) could you guys perhaps provide some insight into https://github.com/ipfs/js-ipfs/pull/525. I am receiving the following error in browser - WebSocket connection to 'ws://127.0.0.1:9090/socket.io/?EIO=3&transport=websocket' failed: Connection closed before receiving a handshake response
<lindybrits> Thanks! :)
<daviddias> lindybrits: that is because the example is assuming people have a WebRTC signalling server on
<daviddias> you can start one by `npm i libp2p-webrtc-star -g`
<daviddias> and then running `star-sig` on your terminal
<lindybrits> Ah hah! I'll do that. Thanks!
<daviddias> lindybrits: what is your use case? :)
<lindybrits> daviddias: working on getting a PoC up and running that combines Ethereum and IPFS in the browser :)
<lindybrits> I only started looking at IPFS 2 weeks ago, with absolutely no experience in Node.js!
<daviddias> sounds good :D
<daviddias> have you seen my talk at DEVCON2? We got the ethereum-vm running with libp2p in the browser
<lindybrits> I have not seen it. Sounds interesting! I'll have a look at it today :)
<daviddias> looking for the video
<lindybrits> You'll be seeing a lot of me on this chat room in the future. I am very keen for IPFS
<lindybrits> Also
mildred1 has joined #ipfs
<daviddias> awesome!
<lindybrits> With my Node experience (which is none haha) I'll be needing a lot of help ;D
mildred has quit [Ping timeout: 245 seconds]
<daviddias> apparently they have only uploaded the demos I showed - https://ethereumfoundation.org/devcon/?session=libp2p
<daviddias> lindybrits: keep the questions coming :) we are also making the process of getting it all running much easier
<daviddias> dignifiedquire: so, is that run with ipfs that has sharding?
<daviddias> how is the memory?
<dignifiedquire> yes that is with sharding
<daviddias> and how is IPFS when it comes to memory usage?
<lindybrits> daviddias: I will do! exciting times. Checking https://github.com/ipfs/js-ipfs/pull/485 daily as well :)
<dignifiedquire> probably lots, but not sure how much of the consumed memory on the machine is actually from the ipfs daemon I'm using
<dignifiedquire> but it is pretty stable after whyrusleeping fixed a couple of bugs
<daviddias> dignifiedquire: wanna give your last approval here: https://github.com/ipfs/js-ipfs-repo/pull/89#pullrequestreview-5455420 (if you still disagree, I'll revert it back, no need to force that)
<dignifiedquire> just ship it, I can change it back later if I feel like it ;)
<lindybrits> daviddias: Whoo hoo! No more errors :) Thanks a lot
<daviddias> wooot! :D
maxlath has joined #ipfs
domanic has joined #ipfs
_boot has joined #ipfs
<_boot> Hi all, I added a bunch of flacs to ipfs yesterday. Trying to play them from a different machine works, but my network is going crazy: only a few hundred KB/s download but like 7 to 9 MB/s upload while trying to play... what is happening?
installgen2 has quit [Read error: Connection reset by peer]
installgen2 has joined #ipfs
Mizzu has joined #ipfs
<A124> _boot It is likely trying to fetch all the blocks at once, not just as fast as you play them.
<A124> What exactly does +v mean on this channel?
<_boot> but the upload rate is going mad on the machine trying to play them
<_boot> ...what is it uploading?
domanic has quit [Quit: Konversation terminated!]
domanic has joined #ipfs
<A124> Well, someone might have noticed you have flac files and is mirroring them.
<A124> When you add anything it tells a few peers that you have certain data - it publishes to the network.
PseudoNoob has quit [Quit: Leaving]
maxlath has quit [Ping timeout: 244 seconds]
<_boot> hmm
funrep has quit [Ping timeout: 265 seconds]
<_boot> can i throttle its upload? it saturates my network and then chokes its download rate :|
<A124> _boot Directly no, people did report luck with .. something I forgot (trickle does not work).
<A124> It is part of kernel, made for shaping traffic.
<_boot> iptables
<A124> No.
<_boot> nftables?
<A124> tc
<_boot> ah
<dignifiedquire> daviddias: this repo needs to be deprecated correct?
<_boot> okay, will check it out - thanks
<A124> Welcome.
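For the record, a hedged sketch of the `tc` approach A124 points at (the interface name and rates are examples; note this shapes all egress on the interface, not just ipfs traffic, and needs root):

```shell
# Throttle uploads with tc's token bucket filter on the outgoing
# interface. Replace eth0 and the rate with values for your setup.
tc qdisc add dev eth0 root tbf rate 2mbit burst 32kbit latency 400ms

# Inspect the active queueing discipline:
tc qdisc show dev eth0

# Remove the limit again:
tc qdisc del dev eth0 root
```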
domanic has quit [Quit: Konversation terminated!]
<daviddias> correct, wanna do it?
domanic has joined #ipfs
<dignifiedquire> sure
<A124> Nergal has a question about what led to choosing LevelDB, and I second that:
<A124> [ Nergal ]: how did you come to chose leveldb for your scanner?
<A124> ^ jbenet whyrusleeping
funrep has joined #ipfs
* A124 wonders what is needed to be worthy +v
<A124> Do you think one man can pull off a rewrite of IPFS in another language?
lindybrits has quit [Quit: Page closed]
<dignifiedquire> A124: that depends on your commitment and available time
<Nergal> aahh, i missed the answer to the leveldb choice. Going to the logs in the other client
<A124> Nergal No answer yet.
domanic has quit [Quit: Konversation terminated!]
domanic has joined #ipfs
maxlath has joined #ipfs
Waex8iWei0ahJ7p has quit [Quit: leaving]
<daviddias> A124: you can totally do it
<daviddias> especially because once you start it, other people will certainly join
<daviddias> and since IPFS and libp2p are more modular than ever
<daviddias> you can implement piece by piece, without having to sync in a ton of context every time
bastianilso__ has joined #ipfs
steefmin_ has joined #ipfs
steefmin_ is now known as steefmin
steefmin has quit [Read error: Connection reset by peer]
mildred1 has quit [Ping timeout: 250 seconds]
Kane` has joined #ipfs
BananaMagician has joined #ipfs
domanic has quit [Read error: Connection reset by peer]
domanic has joined #ipfs
ugjka has quit [Remote host closed the connection]
lkcl has joined #ipfs
ugjka has joined #ipfs
mildred has joined #ipfs
<daviddias> With support by default to dag-pb (the MerkleDAG you know and love) and the new dag-cbor (no more protobufs needed!)
<daviddias> and you can resolve through one data struct in cbor to a data struct in pb! https://github.com/ipld/js-ipld-resolver/blob/master/test/test-ipld-all-together-now.js
<daviddias> Try it out! :D
<dignifiedquire> daviddias: I am so happy we made the ipld interfaces already async, so much less work for me in the update process
Galematias has joined #ipfs
domanic has quit [Quit: Konversation terminated!]
domanic has joined #ipfs
Galematias has quit [Client Quit]
Nergal has left #ipfs [#ipfs]
BananaMagician has quit [Ping timeout: 260 seconds]
evil_filip has joined #ipfs
seharder_ has joined #ipfs
<pjz> A124: Just 1 to start - maybe more later, though that would likely increase cost. Maybe as an option?
<A124> pjz Yes, as an option. Some people want to publish things like html pages, which are small, but want them to be available, so they would gladly pay for a factor of 3.
<pjz> A124: re: bogus DMCA: nice thing about hashes is that they have a low likelihood of accidental regex matches, which is the way that stupid anti-pirates scan for links
<pjz> A124: OTOH, ipfs itself tends to make things 'more available'
<A124> Also I recommend implementing a credit system, so people do not have to use transactions for each file they publish.
<pjz> A124: the submission form will take multiple hashes, and each hash can of course be a directory or whatever.
<A124> pjz Does not matter, say I update my code and want new code published and (depending on what it is) do not care about the old.
<A124> That way one can manage resources much better.
<pjz> A124: ahh. Hm, have to think about that.
<pjz> accounts are so much more trouble than fire-and-forget :)
<pjz> A124: you can always pay for the minimal amount of time you want up front, and extend it if you need to
domanic has quit [Quit: Konversation terminated!]
<locusf> so ipld addresses are resolvable now?
<A124> That said, depending on your location and metered bandwidth, there should or should not be a cost for pinning the stuff, plus a time window (1 day?)
<A124> pjz If you want to, sure, but what if a car hits you and you are in the hospital?
domanic has joined #ipfs
<A124> Files gone. If you are making a system to eliminate single point of failure, do not move that single point to user.
<pjz> maybe future feature. for now I'm trying to get the basics limping.
<A124> pjz Also please notify me when you got anything done/for testing.
<locusf> with ipfs command ?
seharder_ has quit [Quit: seharder_]
<pjz> A124: will do; I'll probably do a limited beta here with free pins and then switch to for-pay and announce it on reddit
evernite- has quit [Quit: ZNC 1.6.1 - http://znc.in]
<A124> pjz Actually a credit system would be simpler, as you can just scan a table that stores the durations and each day subtract the amount for 24 hours.
<pjz> A124: you have no idea of the architecture :) And that idea fails if people pay for less than a day
funrep has quit [Ping timeout: 245 seconds]
evil_filip has quit [Ping timeout: 260 seconds]
<A124> Then use 1h periods.
<A124> The question is: What architecture?
<pjz> App architecture
<A124> Sure I do not, but that is what I am talking about.
<A124> If you just blanket state I have no idea, that means: I do it my way.
rgrinberg has joined #ipfs
<A124> I have experience with systems paid by credits; they can be pre-allocated or used later. People tend to use those as they do not have to bother with multiple payments.
<pjz> right, so right now the whole thing is fronted by Lambda and S3, requests go into SQS, and a process on one of the ipfs nodes (which run on random VPSs around the world) takes them off the SQS queue and turns them into log entries in the status files on S3. Other processes on those nodes scan the status files for any hashes in a 'requested' state and then switch their state and pin them. Later they notice that
<pjz> the time is elapsed and unpin them.
<pjz> designed to scale to lots of ipfs nodes
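A124's credit idea could be sketched roughly like this (all names and the hourly rate are hypothetical, not pjz's actual Lambda/SQS/S3 architecture): users hold a balance, each pin is billed per hour, and unfunded pins are dropped.

```python
# Hypothetical credit-based pin accounting. A real service would call
# `ipfs pin add` / `ipfs pin rm` where noted; here we only track state.

COST_PER_HOUR = 1  # assumed billing rate, in credits

class PinService:
    def __init__(self):
        self.credits = {}  # user -> balance
        self.pins = {}     # hash -> owning user

    def deposit(self, user, amount):
        self.credits[user] = self.credits.get(user, 0) + amount

    def pin(self, user, ipfs_hash):
        self.pins[ipfs_hash] = user  # would call `ipfs pin add` here

    def bill_hour(self):
        # Run once per hour: charge every pin, drop unfunded ones.
        for ipfs_hash, user in list(self.pins.items()):
            if self.credits.get(user, 0) >= COST_PER_HOUR:
                self.credits[user] -= COST_PER_HOUR
            else:
                del self.pins[ipfs_hash]  # would call `ipfs pin rm` here

svc = PinService()
svc.deposit("alice", 2)
svc.pin("alice", "QmExampleHash")
svc.bill_hour()
assert "QmExampleHash" in svc.pins and svc.credits["alice"] == 1
svc.bill_hour()  # balance reaches 0, pin survives this hour
svc.bill_hour()  # no credit left: pin is dropped
assert "QmExampleHash" not in svc.pins
```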
<daviddias> dignifiedquire: :D yeah, it required an extra day of refactoring, someone had to do it, happy that work got shared
<dignifiedquire> daviddias: :) I am very grateful
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
domanic has quit [Quit: Konversation terminated!]
domanic has joined #ipfs
pfrazee has joined #ipfs
anewuser has joined #ipfs
domanic has quit [Remote host closed the connection]
ylp has quit [Ping timeout: 260 seconds]
ylp has joined #ipfs
domanic has joined #ipfs
funrep has joined #ipfs
soloojos has joined #ipfs
soloojos has quit [Remote host closed the connection]
soloojos has joined #ipfs
anonymuse has joined #ipfs
funrep has quit [Ping timeout: 250 seconds]
domanic has quit [Quit: Konversation terminated!]
installgen2 has quit [Quit: ZNC 1.6.3 - http://znc.in]
seharder_ has joined #ipfs
maxlath has quit [Quit: maxlath]
mildred has quit [Ping timeout: 245 seconds]
domanic has joined #ipfs
seharder_ has left #ipfs [#ipfs]
funrep has joined #ipfs
Boomerang has joined #ipfs
domanic has quit [Quit: Konversation terminated!]
domanic has joined #ipfs
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
Boomerang has quit [Ping timeout: 260 seconds]
Boomerang has joined #ipfs
domanic has quit [Quit: Konversation terminated!]
domanic has joined #ipfs
mildred has joined #ipfs
shizy has joined #ipfs
PseudoNoob has joined #ipfs
Boomerang has quit [Remote host closed the connection]
Guest59851 has joined #ipfs
Boomerang has joined #ipfs
<A124> Is there any reason why the gateway nodes etc. do not use IPv6, or am I doing something wrong?
taaem has joined #ipfs
<lgierth> A124: we haven't gotten to it yet. definitely will have ipv6 gateways and bootstrap this quarter
rgrinberg has quit [Ping timeout: 256 seconds]
<A124> Thanks for info. Bootstrapping to a single node that has both ipv4 and ipv6 would do the job to discover ipv6 peers?
anewuser has quit [Quit: anewuser]
<lgierth> a single ipv4 peer will already do the job
<lgierth> since peer routing (the propagation of addresses) doesn't really care that much about what your node is capable of
<lgierth> you just get a list of all the known addresses for a peer and then you can filter what's usable
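That filtering step might look like this (example multiaddrs; a real node decides based on its configured transports, not string prefixes):

```python
# lgierth: peer routing hands you every known multiaddr for a peer, and
# you filter what's usable. Minimal sketch of the filter for a node
# without IPv6 connectivity.

addrs = [
    "/ip4/104.131.131.82/tcp/4001",
    "/ip6/2604:a880:1:20::1f9:9001/tcp/4001",
    "/ip4/127.0.0.1/tcp/4001",
]

def usable(addr, have_ipv6=False):
    if addr.startswith("/ip6/"):
        return have_ipv6
    return addr.startswith("/ip4/")

dialable = [a for a in addrs if usable(a, have_ipv6=False)]
assert dialable == ["/ip4/104.131.131.82/tcp/4001",
                    "/ip4/127.0.0.1/tcp/4001"]
```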
<lgierth> btw it also turns out that ipfs so far is pretty bad at forgetting old addresses
<A124> What are the implications?
<lgierth> oh just huge address lists for longlived nodes
<A124> And to confirm, ipv6 node that would use a single ipv6 peer that also does ipv4, would work to discover other ipv6 peers, correct?
<lgierth> try `ipfs dht findpeer` with one of the peerids from `ipfs bootstrap`
<lgierth> A124: it doesn't need any ipv6 peers
<lgierth> the peer routing is independent of which transports you actually use
<A124> if it is ipv6 only?
maxlath has joined #ipfs
<A124> But yeah, I got the answer indirectly.
<lgierth> then it currently has a problem because the default bootstrap nodes don't speak ipv6 yet
<_boot> i see ipv6 peers when using findpeer
<A124> Any node can be used as bootstrap no?
<lgierth> _boot: yup there are plenty-ish, just the default bootstrap nodes don't speak ipv6
<lgierth> A124: yeah correct
<A124> Problem solved.
<_boot> ah okay
<A124> Thanks.
<lgierth> ipfs dht findpeer QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM | wc -l
<lgierth> 687
<lgierth> that ^ is pluto.i.ipfs.io
<A124> 687 peers? Why do some people get a low number of peers?
<lgierth> that's not the number of peers
<lgierth> that's the number of addresses that the network knows for pluto
<A124> Oh, damn.
domanic has quit [Quit: Konversation terminated!]
* A124 needs to start up a node to look.
<lgierth> the few ipv6 addresses in that list are cjdns addresses (fc00::/8)
domanic has joined #ipfs
<lgierth> ok gotta run, lunch time
<A124> Thanks, enjoy!
stwcx has left #ipfs ["WeeChat 1.4"]
apiarian has quit [Ping timeout: 256 seconds]
dguttman has joined #ipfs
<rschulman> lgierth: Why does pluto have 687 addresses??
dguttman has quit [Remote host closed the connection]
dguttman has joined #ipfs
soloojos has quit [Remote host closed the connection]
soloojos has joined #ipfs
Boomerang has quit [Read error: Connection reset by peer]
apiarian has joined #ipfs
<A124> I see 5 addresses, two localhost one docker and two public.
jedahan has joined #ipfs
bastianilso__ has quit [Quit: bastianilso__]
bastianilso__ has joined #ipfs
jedahan has quit [Remote host closed the connection]
rgrinberg has joined #ipfs
matoro has quit [Ping timeout: 244 seconds]
domanic has quit [Quit: Konversation terminated!]
domanic has joined #ipfs
Geertiebear has joined #ipfs
Boomerang has joined #ipfs
jedahan has joined #ipfs
jedahan_ has joined #ipfs
jedahan_ has quit [Remote host closed the connection]
jedahan_ has joined #ipfs
ylp1 has quit [Quit: Leaving.]
jedahan has quit [Ping timeout: 265 seconds]
maxlath has quit [Ping timeout: 260 seconds]
<polezaivsani> Can somebody sanity check me - raw dag nodes are to be distinguished with new cids, right?
grosscol has joined #ipfs
ulrichard has quit [Read error: Connection reset by peer]
mrd0ll4r has quit [Ping timeout: 250 seconds]
steefmin has joined #ipfs
misuto has quit [Ping timeout: 260 seconds]
jedahan_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
maxlath has joined #ipfs
jedahan has joined #ipfs
domanic has quit [Quit: Konversation terminated!]
domanic has joined #ipfs
bsm1175321 has joined #ipfs
up_dn has joined #ipfs
anonymuse has quit [Read error: Connection reset by peer]
anonymuse has joined #ipfs
ylp has quit [Ping timeout: 260 seconds]
ylp has joined #ipfs
espadrine has quit [Remote host closed the connection]
espadrine has joined #ipfs
domanic has quit [Read error: Connection reset by peer]
domanic has joined #ipfs
galois_d_ has joined #ipfs
<daviddias> this return is inside an async method (waiting for done to be called)
<daviddias> and nothing is picking on that return value
<daviddias> is that needed at all?
barnacs_ has quit [Ping timeout: 245 seconds]
<dignifiedquire> this whole file makes me cringe everytime I look at it, but it wasn't written by me, I just fixed it to work with the new code and left the rest as untouched as possible
<dignifiedquire> maybe with async hashes it's time to fix..
galois_dmz has quit [Ping timeout: 250 seconds]
up_dn has quit [Remote host closed the connection]
<daviddias> tell me about it...
ygrek has joined #ipfs
<daviddias> the whole module
<dignifiedquire> hey the rest is not so bad :)
barnacs has joined #ipfs
<lgierth> rschulman: because the network doesn't drop old addresses
misuto has joined #ipfs
domanic has quit [Ping timeout: 260 seconds]
<rschulman> lgierth: Ahh, gotcha. Didn't realize the nodes changed addresses that much.
<lgierth> docker container's internal address, and random ports for upnp
_boot has quit [Quit: leaving]
edrex has quit [Read error: Connection reset by peer]
edrex has joined #ipfs
<dignifiedquire> daviddias: let me know if I can help with unixfs
<dignifiedquire> gotta go in 5 but can take a look later/in the morning
Bat`O has quit [Ping timeout: 260 seconds]
Bat`O has joined #ipfs
Mizzu has quit [Quit: WeeChat 1.5]
<Kubuxu> polezaivsani: yes, only nodes running very recent 0.4.5-dev will know them
<Kubuxu> will know how to use them
<polezaivsani> thanks Kubuxu
<Kubuxu> older nodes won't even transfer them
<Kubuxu> as we had to upgrade bitswap protocol
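For context on the "new cids": a CIDv1 is a version byte, a codec, and a multihash, multibase-encoded. A sketch for a raw (codec 0x55) sha2-256 block, enough to reproduce the familiar `bafkre...` prefix that raw nodes get:

```python
import base64
import hashlib

# Sketch of CIDv1 construction for a raw block:
#   0x01 (cid version) + 0x55 (raw codec) + multihash,
# where the multihash is 0x12 (sha2-256) + 0x20 (32-byte length) + digest,
# all encoded as multibase 'b' = lowercase base32 without padding.

def raw_cid_v1(data: bytes) -> str:
    digest = hashlib.sha256(data).digest()
    multihash = bytes([0x12, 0x20]) + digest
    cid_bytes = bytes([0x01, 0x55]) + multihash
    return "b" + base64.b32encode(cid_bytes).decode().lower().rstrip("=")

cid = raw_cid_v1(b"hello")
assert cid.startswith("bafkre")  # shared prefix of raw+sha256 CIDv1s
```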
gmcquillan__ has joined #ipfs
gmcquillan__ is now known as gmcquillan
kulelu88 has joined #ipfs
maxlath has quit [Ping timeout: 260 seconds]
<daviddias> dignifiedquire: working on it, refactoring the flush-tree.js a lot to make it more sane
<daviddias> help is welcome :)
<daviddias> I'm getting close, but it took me a bit to make sure I understood everything that was going there again
<dignifiedquire> cool
<daviddias> just pushed the latest commit (git amended the wip one)
<dignifiedquire> I found out why my shasums are failing
<dignifiedquire> I wasn't piping streams properly
<daviddias> every shasum failed?
<dignifiedquire> yeah because the tars are empty
<dignifiedquire> 😢
<dignifiedquire> looking much better :)
<dignifiedquire> could use some async in the last part of next
<dignifiedquire> but I have the check implemented now, so it won't happen again :)
galois_d_ has quit [Remote host closed the connection]
galois_dmz has joined #ipfs
<pjz> what did I forget to do to make localhost:5001 available from my ipfs daemon?
chax has joined #ipfs
<A124> Can IPFS be run (in theory) with minimum bandwidth and cpu and 100MB max memory?
<pjz> A124: bandwidth for a long-lived node last I heard was around 300kbps
<A124> Yes, that is in practice; I am talking about the future. I feel sacrificing performance to retain development compatibility might not be the best idea, and I am not sure what paths IPFS will end up taking.
<pjz> A124: my node (started yesterday, just got on the DHT, no traffic) currently has around 18M RSS, 216M VSZ
<pjz> A124: so you can maybe get away with 100MB RAM but you'll need some swap
<A124> Like now protobuf + ipld. And content addressing of each chunk means having a database of chunks and sha256 hashes both wrapped in protobuf and not wrapped in protobuf.
<A124> And the correlation between them, complicating content addressing in general.
<pjz> A124: are you talking about the new diskstore stuff?
<A124> No, say you have a iso of Ubuntu.
<pjz> A124: because the previous impl copied the data into the IPFS blockstore, and people didn't like it b/c it meant files took up 2x disk space - one on the fs, one in the blockstore
<pjz> "You have a iso of Ubuntu."
* pjz smiles.
<A124> You cannot just use hashed chunks of 256k, you have to wrap it in protobuf and hash that.
<pjz> ...sure, and?
s_kunk has quit [Remote host closed the connection]
<A124> And that complicates content addressing.
<pjz> well, maybe
Boomerang has quit [Quit: Leaving]
<pjz> the first blockstore took the file and broke it up and put it into the blockstore as blocks
<pjz> wrapped in protobuf and etc
<pjz> though "wrapped in protobuf" isn't quite right
<A124> So either everyone has to have ipfs compatible protobuf chunker (currently no issue) and hasher (specific) or they have to rely on hash databases.
<pjz> that's true no matter how you hash though
<A124> And someone will have to make the databases, which means there will be nodes downloading stuff just to hash it.
<pjz> ...what? "downloading stuff just to hash it" ? makes no sense. If they're downloading it, they already know the hash, as that's how they downloaded it.
<A124> pjz You are incoherent and your text is not comprehensible by context.
<A124> Also sure I know how ipfs works.
<pjz> A124: I was thinking the same thing about you.
<A124> Open your closed mind, everyone will not have IPFS node.
<pjz> if they're getting access to IPFS files, they have to go through an IPFS node. That node may be a gateway, sure, but there's still an IPFS node involved.
<pjz> And to make any kind of request from an IPFS node, a hash is required.
<dignifiedquire> daviddias: so the runtime issue got overwhelming feedback ;)
<A124> pjz Why are you telling me really basic stuff.
<dignifiedquire> gonna move forward tomorrow if nothing happens so we can get stuff shipped
<pjz> A124: because you don't seem to understand it.
<pjz> A124: "
rgrinberg has quit [Ping timeout: 265 seconds]
<pjz> A124: "downloading stuff just to hash it" makes no sense
<A124> How long have you been here?
<pjz> A124: please explain
<pjz> A124: I'm here for another few hours
G-Ray has quit [Quit: Konversation terminated!]
<daviddias> dignifiedquire, l want to review the PR before the merge, but sounds good to move forward
<A124> People that want to use a different system than IPFS but want to locate data not dependent on a specific (IPFS) implementation.
<A124> Deduplication, archival, recovery, distribution, whatever.
<daviddias> also, let's make sure to have in libp2p-crypto an issue to track support to all browsers, with those notes you have in the PR
<daviddias> (which things have to be shimmed)
<A124> pjz I have not seen you around here till yesterday or so, yet you go explaining something. Same thing happened several months ago with someone else.
<pjz> A124: I've been hanging out here for a couple years I think
<pjz> A124: I usually lurk
<A124> You just obscured my question that will likely not get attention now, unless I quote people which I am trying to avoid.
<A124> Then it seems surprising that you have not thought of compatibility.
<pjz> A124: which question? the one about bandwidth and RAM?
espadrine has quit [Ping timeout: 250 seconds]
<A124> Obviously.
<pjz> A124: theoretical minimal bandwidth is whatever it takes to participate in the DHT or whatever p2p data-routing protocol is used in the future
<pjz> A124: obviously
<A124> And yes, such databases would be needed, so there will be nodes downloading stuff just to hash it. So there are databases for compatibility.
<pjz> databases of what?
<A124> Forget it.
<dignifiedquire> only for everyone else
<A124> IPFS multihash <-> raw data hash
<pjz> oh. Well, only non-IPFS nodes would need/want that, and it seems to me more likely that someone would just set up an ipns-named directory
<A124> directory for what?
<dignifiedquire> daviddias: you will have to review all them prs anyway, but just want to make sure i can go ahead and start fixing and cleaning up everywhere
<pjz> consider an ipns directory that was like .../by_rawhash/<raw_hash> -> ipns content
<pjz> store that as a directory in ipfs and you can then reference content by the raw hash easily
<A124> That's still a database
<A124> And centralized, so makes no sense; pubsub would make sense.
<A124> So... what is the outcome of this lengthy conversation for you?
<pjz> sure, whatever. I don't see it as a big deal because right now compatibility isn't a big priority.
<A124> For me it's nothing, so it better be something.
<A124> "I don't see it as a big deal because right now compatability isn't a big priority."
<pjz> yeah, now I understand what you're talking about.
<A124> - City planning problem.
<A124> First they do not care and build everywhere there was no cow road, then the infrastructure is too large to change.
<pjz> it's not a design goal, AFAIK
<A124> Without disrespect, you seem to be failing to see the big picture of projects and their impact.
<A124> What is not a design goal?
<pjz> compatibility with other hashing systems
<A124> ...
<pjz> are you saying it should be a major design goal?
<A124> You know what implications are and what indirection is?
<pjz> yes
<A124> Then you fail.
<pjz> ?
<A124> If you want a system to perform well, you do not necessarily create obstructions for compatibility and adoption.
<pjz> The reason for the incompatibility (eg use of multihash) is to facilitate future expansion; compatibility is secondary to that.
<A124> So stuff has to be carefully considered. And in the case of data blocks there is no reason not to have raw data level addressing capability.
<A124> pjz Again, that has nothing to do with it.
<pjz> Maybe I'm not sure what you're arguing in favor of. Switching to using raw hashes instead of multihash? block-internal addressing? something else?
<A124> I'm not arguing.
<A124> {Data:"hello world", size:11}
<pjz> What are you advocating in favor of, then? I thought you were saying you thought IPFS should use raw hashes instead of multihash.
<A124> Gives you a different hash than 'hello world'
<A124> Encoding multihash is not a problem for compatibility.
<A124> If someone gives you a multihash (which is just sha256) of a single block, you do not or cannot know if you got the same data.
<A124> You have to hash everything again wrapped with protobuf.
<A124> Ie. no separation of data level and tree level.
matoro has joined #ipfs
ygrek has quit [Ping timeout: 252 seconds]
<A124> And from a viewpoint of having a hash, it is no different, you do not know size of an object, unless you request it from someone that has it.
<A124> And you cannot be certain data is correct, until you got it whole.
<A124> So there is no reason for not separating that layer.
<A124> pjz Any further questions?
espadrine has joined #ipfs
chax_ has joined #ipfs
<pjz> You can ask the size of a hash from a remote node
<pjz> you don't have to download it to get the size
chax has quit [Ping timeout: 260 seconds]
<A124> Kubuxu Who should I turn to to weigh in, please? I do not know how to even clearly formulate a short issue that everyone would understand, as people tend to not see the context so the issue would wind up, and I would like to see people in a position to weigh in and react. Thanks.
<A124> pjz Skipped a line?
<A124> Oh, "unless you request it from someone that has it." Unless you request the size, "obviously".
<A124> Simply said I do not see a single benefit for not having a raw data layer.
<pjz> ah, I see. by 'it' you mean 'the size' not 'the object', but it's a bit unclear because the antecedent is part of a prepositional phrase
<A124> pjz Did I help you? Did you gain anything?
<A124> I hope so.
<A124> pjz Also your "obvious" minimum bandwidth is pretty wide.
<A124> Given that now you send the whole want list each time, which also means work for other nodes.
funrep has quit [Ping timeout: 244 seconds]
<pjz> A124: the point of that was that using the current DHT is not a settled decision
<pjz> A124: so it's subject to change
<A124> I am aware of that.
<A124> And that is why I asked, else I would not ask
<A124> pjz Did you gain anything from the conversation?
ygrek has joined #ipfs
<pjz> A124: I think so. I think I understand that you think that blocks should be raw data and hashes should be in a separate type of structure for better compatibility
<pjz> A124: is that correct?
<A124> Good. Glad, else it would be 50 minutes of waste.
<A124> It does not need to be separate, neither on the storage nor the distribution layer inherently. Though storage would benefit from that; given fixed-size blocks are used and they are a binary multiple, it helps with a lot of things.
<A124> The thing is one can add one layer of addressability of the content, using the raw data hash, without breaking anything else, and gaining the compatibility of not having to track the IPFS spec and to dual hash, using protobuf.
<pjz> that presumes that there's only one hashing algorithm ever
funrep has joined #ipfs
<pjz> and it stays the same forever
<A124> Given that the world would just not use only IPFS. Which would be a nice world, but if there is commerce, I feel other data solutions will still be used.
<A124> pjz Said who.
<A124> Nothing presumes anything, it would still be multihash format.
<pjz> okay, so where would this raw data be accessible? just in the storage layer? would those blocks be network-available?
<A124> Just stripped of the abstraction layer, for improved addressability, not having two layers at block level.
<pjz> addressability from where?
<A124> Network.
<pjz> The problem is that they're not self-describing blocks then
<A124> But the benefits would go deep down to Filesystem if relevant.
<A124> That does not make any problem.
<pjz> which currently Blocks are all self-describing because of the content-id at the beginning
Oatmeal has quit [Ping timeout: 245 seconds]
<A124> I did talk about the type of data used for binary blocks.
<A124> Not objects.
<A124> So just a single content id, unless I am wrong.
Qwertie has quit [Ping timeout: 245 seconds]
<A124> But in a way I am thankful, now I will try to push it, unless proven it is not a good thing.
<pjz> A124: right now there are content types for Protobuf, CBOR, Raw, and JSON
<A124> Link to the raw?
<pjz> it'
Qwertie has joined #ipfs
galois_d_ has joined #ipfs
<A124> I forget things, so I need to re-check and re-read. Which is how I forgot about this issue also.
<A124> Well I thought you got more around it.
<pjz> there's also constants for Ethereum and Bitcoin blocks and txns
<A124> But the idea is to have the very same data that are now represented as Protobufs (data blocks of files) addressable by the raw data.
<pjz> a cid (content-id) is a version (of the cid) a codec (protobuf/cbor/raw/json) and a hash (multihash)
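[Editor's note: the CID layout pjz describes — version, codec, multihash — can be sketched as below. This is a minimal illustration for the discussion; the real CID binary encoding uses varints and the multicodec registry, and the codec table here is only a stand-in.]

```python
import hashlib

# Illustrative codec table (stand-in for the real multicodec registry).
CODECS = {"protobuf": 0x70, "cbor": 0x71, "raw": 0x55, "json": 0x0200}

def make_cid(codec: str, data: bytes, version: int = 1) -> tuple:
    """Sketch of a CID as the tuple (version, codec, multihash).

    The multihash is simplified here to (algorithm name, digest);
    the real format is binary with varint-coded fields.
    """
    digest = hashlib.sha256(data).digest()
    return (version, CODECS[codec], ("sha2-256", digest))
```

For example, `make_cid("raw", b"hello world")` yields a CIDv1 tuple whose hash is the plain sha256 of the data, matching the "raw leaves" idea discussed later in this log.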
chax_ has quit [Remote host closed the connection]
<A124> There might already be some work or an issue, who knows. Searching issues on GitHub is not so precise a matter.
chax has joined #ipfs
galois_dmz has quit [Ping timeout: 260 seconds]
chax has quit [Remote host closed the connection]
chax has joined #ipfs
<Kubuxu> A124: what is the issue?
<A124> Kubuxu Addressability of datablocks by content hash (without the protobuf wrapper)
<Kubuxu> So we just introduced raw leaf blocks.
<A124> Will that be implemented, is that planned? Basically it is just stripping the last layer of the onion.
<Kubuxu> Which means that files smaller than 256KiB will have the same hash as the file itself.
<A124> Sweet.
<Kubuxu> problem with bigger files and addressing them just by the whole file hash is that it opens doors for whole domain of DoS attacks.
<Kubuxu> and reduces performance, and usability.
<A124> No, that is not my concern.
<A124> Each block of those larger files, to be addressable by hash of the data itself.
<Kubuxu> this is not something we are planning in near future.
<Kubuxu> I mean, each 256KiB chunk is addressed by the hash itself
<A124> It is not, or am I wrong?
<A124> It is 256K + protobuf.
<Kubuxu> but whole 1MiB file will be addressed by hash of CBOR encoded, IPLD structure
<A124> No, no, no. Let me clarify.
<A124> Data itself = data of each chunk.
<A124> Just the data, without the protobuf layer wrapping that single data block.
<Kubuxu> I still don't understand.
Mizzu has joined #ipfs
<Kubuxu> with the raw leaves, that were just introduced.
<A124> now: h({Data:"hello world", size:11})
<Kubuxu> s/leaves/leafs/
<Kubuxu> IPFS hash of that leaf, is the hash of the data itself
<A124> want: h(hello world)
<Kubuxu> yes, for small files
<A124> No, for larger files.
<A124> Ok, let's say chunk is 6 characters. now: h({Data:"hello ", size:6}), want: h(hello )
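[Editor's note: the distinction A124 is drawing can be shown with a toy sketch. The `wrap` function below is a hypothetical stand-in for the unixfs protobuf envelope, not the real wire format; the point is only that hashing the wrapped block yields a different digest than hashing the raw chunk.]

```python
import hashlib

def h(data: bytes) -> str:
    """sha256 hex digest, standing in for the multihash digest."""
    return hashlib.sha256(data).hexdigest()

def wrap(data: bytes) -> bytes:
    # Hypothetical stand-in for protobuf({Data: ..., size: ...});
    # the actual unixfs protobuf encoding differs.
    return b"{Data:" + data + b",size:" + str(len(data)).encode() + b"}"

chunk = b"hello "
# The wrapped block hashes differently from the raw data, so a plain
# sha256 of a file chunk cannot be matched against the wrapped hash
# without re-doing the wrapping.
assert h(wrap(chunk)) != h(chunk)
```

This is exactly why the "raw leaves" feature discussed below matters: with raw leaf blocks, the block hash is `h(chunk)` directly.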
<A124> We clear now?
<Kubuxu> yes, it is not planned
<A124> Can we?
<Kubuxu> you could probably implement it as supplementary system
<A124> Supplementary addressing. So you could use either h(protobuf(data)) or h(data).
<A124> That would yield benefits in compatibility, addressability, storage, ...
<A124> That way storage layer could check integrity by hash without protobuf, being agnostic (apart from the chunk size) to system that does use it.
<Kubuxu> yeah, I think it should be possible, but it could be DoSed quite easily.
<A124> And at the same time one could implement anything on top of that system and share data with ipfs, given it would be data blocks / raw data.
<A124> Which would put IPFS in the position beneficial to itself and to others.
<A124> Kubuxu How that could be DoSed and current implementation cannot?
<Kubuxu> I download 100MiB file
<Kubuxu> I can check if you are sending me right file only after I download it whole.
<A124> That is not true, you got the hashes like now.
captain_morgan has joined #ipfs
<A124> h(protobuf(data)) or h(data)
<A124> data refers to single 256K block.
<A124> Or whatever the size changes / is.
rendar has quit [Ping timeout: 265 seconds]
<Kubuxu> so we have it for 256K blocks
<A124> 256K blocks of raw data?
<Kubuxu> yes
Mizzu has quit [Quit: WeeChat 1.5]
<A124> So if I have 256K chunk of some file (given it is aligned) the sha256 hash will be identical to the data block?
<A124> After multihash encoding.
<Kubuxu> yes
<A124> Then either I failed to follow and it changed in the past months, or I got it explained wrong.
<A124> And was blindly rewriting code like a monkey without caring; got to check the source. And try. If so, that is awesome.
maxlath has joined #ipfs
<Kubuxu> it was just introduced
<Kubuxu> experimental feature so far
Encrypt_ has joined #ipfs
<Kubuxu> it will be enabled by default in 0.5
<A124> Does that mean that existing hashes will change, if the file gets added to IPFS?
Oatmeal has joined #ipfs
<A124> Yes, yes, this is great news. I did discuss this with you I think, and the core team, and with the morph.is creator we were crunching numbers and trying to figure out a way of efficient content addressing and storage. I did come up with multiple ideas that would now fit together.
<A124> Have to find the switch.
<Kubuxu> yes, but as 0.5 introduces IPLD new hashes have to change either way
shizy has quit [Quit: WeeChat 1.6]
galois_d_ has quit [Remote host closed the connection]
shizy has joined #ipfs
<whyrusleeping> Kubuxu: incorrect
galois_dmz has joined #ipfs
<A124> Well, as it is development, it is not really a concern yet for us (whoever I speak on behalf of now - multiple groups). What is the experimental switch? Is that a specific branch?
<whyrusleeping> A124: for raw leaves?
<A124> Yes.
<whyrusleeping> just use master and pass the --raw-leaves flag to add
<whyrusleeping> note that those objects won't be backwards compatible with 0.4.4 or earlier
<A124> Thanks! Yes, aware.
<A124> I + We got plans for harnessing this both inside and outside IPFS, spanning to the filesystem (userspace for now) level, flattening stacks and widening compatibility with everything.
<whyrusleeping> A124: <3
<A124> Thank you, thank you for going the right way. <3
<whyrusleeping> i fought long and hard to get the ability to do raw blocks like this
<whyrusleeping> there were three solid days in lisbon a few months back of discussion on this topic
<whyrusleeping> so i'm really happy other people appreciate it :)
<A124> Yes, yes, I was the one advocating that few months ago.
chax has quit [Ping timeout: 245 seconds]
<A124> From several perspectives, this change is immense benefit.
chax has joined #ipfs
<whyrusleeping> Yeah, it has all sorts of really interesting benefits
Guest59851 has quit [Ping timeout: 256 seconds]
chax has quit [Remote host closed the connection]
chax has joined #ipfs
rendar has joined #ipfs
edrex has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
edrex has joined #ipfs
<Kubuxu> whyrusleeping: they won't be default in 0.5?
<whyrusleeping> Ah, you mean raw blocks.
<A124> I will not hide it, I got rush from that.
<whyrusleeping> Yeah, i think we will default to raw blocks in 0.5
<whyrusleeping> But still use protobuf for unixfs
<Kubuxu> that is an interesting choice
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<whyrusleeping> at least for now, its tried and tested, and works quite well
<whyrusleeping> plus, its more compact than cbor
<whyrusleeping> which matters for file storage
<whyrusleeping> plus, everything is ipld now
<whyrusleeping> :D
<seharder> Yesterday afternoon I was experimenting with trying to remove content from IPFS. With two nodes, after adding content and then unpinning and running "repo gc" on both nodes, I could still cat the content. If I restarted both daemons the content was then inaccessible/gone.
<seharder> It was suggested this might be a bug so I tried to reproduce the issue today
cemerick has quit [Ping timeout: 244 seconds]
<whyrusleeping> seharder: hrm... could be that the content was in some memory cache somewhere in ipfs...
<whyrusleeping> that feels like a bug at some level
<seharder> After about an hour of trying I am unable to make content go away.
<whyrusleeping> lol, something something permanent filesystem...
<A124> seharder The more you try, the more it can survive ;)
<A124> Use findprovs instead.
<seharder> truth
<whyrusleeping> seharder: mind detailing your process in an issue?
<whyrusleeping> its something we definitely want to look at
evil_filip has joined #ipfs
<seharder> I'm not familiar with findprovs
<A124> seharder Findprovs might have a similar effect though. Did you or someone try to fetch the content via a public gateway?
mildred has quit [Ping timeout: 245 seconds]
<seharder> I can give that a try
<A124> Well, it just looks for nodes that have the content without fetching it.
<A124> The point is, if you use public node it gets caches.
<A124> So it will survive.
<seharder> One of the nodes is a digital ocean vps
<A124> *Cached. Well, even private one.
<A124> I do recommend trying that on random data. Was it a text, or what kind of file/data?
<seharder> a text file with "hello" in it
mildred has joined #ipfs
<A124> Someone today already asked about an upload being used. So I suspect there is someone (with grey, white, or even black motives) who monitors stuff and tries to fetch everything.
<A124> seharder Really bad choice. I bet someone picked exactly the same file.
<A124> And by the nature of content addressing, even if you only hashed the file without adding it, then tried to get it by the hash, that would work.
<A124> I bet hello world file exists also.
<seharder> So this all makes sense. Probably not a bug??
<seharder> I will try same exercise with random data and report back.
evil_filip has quit [Remote host closed the connection]
<A124> Yes it does, just tried it.
<A124> It took solid 5 seconds, but it does.
<A124> ipfs cat $(echo "hello world" | ipfs add -n | cut -d' ' -f2)
polezaivsani has quit [Remote host closed the connection]
<A124> seharder yes, even if the random file would be fetchable, that does not yet mean it is a bug, just makes the chances more slim. Try short random string, and few MB random file. Wait a few minutes, then remove, repo gc, and try.
<whyrusleeping> yeah, i definitely have multiple variants of "hello[world|ipfs|people|.......]" added to ipfs
mildred has quit [Ping timeout: 260 seconds]
cemerick has joined #ipfs
mildred has joined #ipfs
<seharder> got it
mildred has quit [Quit: Leaving.]
mildred1 has joined #ipfs
chax has quit [Remote host closed the connection]
chax has joined #ipfs
elusiveother has joined #ipfs
<elusiveother> that's a lot of M-s
chax has quit [Ping timeout: 260 seconds]
lacour has joined #ipfs
<seharder> OK, ran same exercise with random content, after unpinning and "repo gc" the files are immediately gone.
<seharder> I was getting fooled because my content was not random. Sorry to waste everyone's time.
<A124> seharder time that helped is time well spent.
matoro has quit [Ping timeout: 245 seconds]
cemerick has quit [Ping timeout: 252 seconds]
evil_filip has joined #ipfs
chax has joined #ipfs
yoosi has quit [Remote host closed the connection]
yoosi has joined #ipfs
evil_filip has quit [Ping timeout: 245 seconds]
Geertiebear has quit [Quit: Leaving]
G-Ray has joined #ipfs
espadrine has quit [Ping timeout: 260 seconds]
grosscol has quit [Quit: Leaving]
<dignifiedquire> whyrusleeping: does files rm clear the block from disk or does it only remove the association in the file system and delete on next gc?
<whyrusleeping> just clears the association
<dignifiedquire> excellent
<dignifiedquire> cleaning up and starting again, do you want to check anything before?
chax has quit [Remote host closed the connection]
G-Ray has quit [Quit: Konversation terminated!]
<pjz> anyone know why my ipfs is config'd to listen to localhost:5001 but doesn't answer there? Did I forget to do something?
s_kunk has joined #ipfs
s_kunk has joined #ipfs
s_kunk has quit [Changing host]
<A124> pjz something else already bound on that port?
<pjz> no; telnetting to it gets me connection refused
rgrinberg has joined #ipfs
ZaZ has joined #ipfs
ianopolous has joined #ipfs
<pjz> why is fs-repo-migrations not part of go-ipfs?
<pjz> or at least shipped with it?
Encrypt_ has quit [Quit: Sleeping time!]
<pjz> turns out I have an old repo, so my node wasn't opening up its old api since I hadn't migrated my repo
edrex has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
edrex has joined #ipfs
funrep has quit [Quit: Lost terminal]
edrex has quit [Remote host closed the connection]
edrex has joined #ipfs
dmr has quit [Quit: Leaving]
aa__ has joined #ipfs
aa__ has quit [Client Quit]
ianopolous_ has joined #ipfs
anonymuse has quit []
Stskeeps has quit [Ping timeout: 244 seconds]
ianopolous has quit [Ping timeout: 245 seconds]
Stskeeps has joined #ipfs
edrex has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
edrex has joined #ipfs
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
galois_d_ has joined #ipfs
edrex has quit [Remote host closed the connection]
edrex has joined #ipfs
galois_dmz has quit [Ping timeout: 260 seconds]
arkadiy has joined #ipfs
workboot[m] has joined #ipfs
maxlath has quit [Quit: maxlath]
<arkadiy> hi guys, kind of a dumb question: what do we call this "/ip4/127.0.0.1/tcp/4001/ipfs/QmZnCmrSUtVZrjGcNikzqEM92RuHYuG5DZnAENiYirQqJY"? it's a multiaddress with an appended node ID, is that still a multiaddr?
bastianilso__ has quit [Quit: bastianilso__]
<arkadiy> and if so, is there a spec that addresses this?
mildred1 has quit [Ping timeout: 276 seconds]
<lgierth> arkadiy: yes it's all multiaddrs, if you wanna write out the specific type of multiaddr, that's multiaddr-filter syntax: /ip4/tcp/ipfs will match the address you posted
sametsisartenep has quit [Quit: zZz]
<lgierth> arkadiy: check out github.com/multiformats/multiaddr
<lgierth> the spec needs editing work, and support for more types such as /dns and /https
<arkadiy> yeah I am looking at that page
kvda has joined #ipfs
<arkadiy> doesn't really tell me the limitations on the format.... I am guessing it's supposed to be tuples of <protocol name>/<protocol-specific address, port, id, etc>
<lgierth> ah maybe i misread your question slightly -- it's still a multiaddr no matter what you append after /ip4 and /tcp
<lgierth> it can be anything really as long as it's representable as a unix path
<lgierth> let me find the issue where the /onion multiaddr was designed
<arkadiy> so if we want to have /ip4/54.173.71.205/tcp/9000/mediachain/Qmb561BUvwc2pfMQ5NHb1uuwVE1hHWerySVMSUJLhnzNLN
<lgierth> and yes the requirement is it's <protocol>/<data> tuples
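[Editor's note: the tuple structure lgierth describes can be sketched with a small parser. It assumes every protocol carries exactly one value, which holds for `/ip4`, `/tcp`, and `/ipfs` as used here; some real multiaddr protocols take no value, so this is not a general implementation.]

```python
def parse_multiaddr(addr: str) -> list:
    """Split a multiaddr into (protocol, value) tuples.

    Simplifying assumption: each protocol takes exactly one value,
    true for the /ip4, /tcp, and /ipfs components in this discussion.
    """
    parts = addr.strip("/").split("/")
    # pair even-indexed protocol names with odd-indexed values
    return list(zip(parts[::2], parts[1::2]))

addr = "/ip4/127.0.0.1/tcp/4001/ipfs/QmZnCmrSUtVZrjGcNikzqEM92RuHYuG5DZnAENiYirQqJY"
tuples = parse_multiaddr(addr)
# tuples[0] == ('ip4', '127.0.0.1'); tuples[1] == ('tcp', '4001')
```

A `/mediachain` component, as proposed below, would slot into the same scheme as one more (protocol, value) tuple.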
<arkadiy> we need to get mediachain as a codepoint in https://github.com/multiformats/multiaddr/blob/master/protocols.csv
<arkadiy> right?
<lgierth> yes
<arkadiy> ok
<arkadiy> can we do that? :D
<arkadiy> I can open a PR
<arkadiy> not sure what the procedure is
workboot[m] has left #ipfs ["User left"]
ilyaigpetrov has quit [Quit: Connection closed for inactivity]
ulrichard has joined #ipfs
<lgierth> wait i'm not actually too sure. /onion was about a transport, and you're addressing a protocol on top of the swarm right?
ulrichard has quit [Remote host closed the connection]
<lgierth> is that still the libp2p swarm stack of secio/multistream/etc.?
<lgierth> just with mediachain instead of ipfs?
<arkadiy> yeah, we're on the same stack for the most part
<arkadiy> but the exposed services are different, i.e. there's no guarantee that you can get objects from a mediachain node or anything like that
<arkadiy> we haven't quite figured out how the interop is going to work, mostly thinking of IPFS and mediachain as complementary sets of capabilities that a node can express
<arkadiy> maybe this is a longer conversation then?
<lgierth> yes /ipfs is basically just the bitswap protocol, with secio and layers of multistream underneath, and transports at the lowest
<lgierth> the difference in your stack is there's no bitswap
<arkadiy> so we have secio and multistream
<arkadiy> but not bitswap
<arkadiy> yeah exactly
<lgierth> i think you don't need an /ipfs multiaddr
<lgierth> eeh sorry /mediachain multiaddr
<arkadiy> we do need our own multiaddresses tho
<arkadiy> you don't think so?
<arkadiy> it seems like an /ipfs multiaddr implies bitswap, no?
<lgierth> but i quickly realized it's good to have to signify there's no bitswap, but mediachain
<lgierth> sorry for being confusing
<arkadiy> haha ok wait so what is your recommendation
<lgierth> a /mediachain multiaddr
<arkadiy> cool ok
<lgierth> or /secio/$peerid/mediachain
<arkadiy> shall I open a PR against that CSV?
<arkadiy> hmm
<arkadiy> the latter is interesting
<lgierth> i don't understand why /ipfs isn't /secio/$peerid/ipfs anyhow
<lgierth> arkadiy: yes go for it, but also describe the stack thing, i'm not sure i'm giving the best of answer here haha
<arkadiy> ok, sounds good, will open PR
<arkadiy> thanks!
<lgierth> do you implement the /ping and /identify protocols btw?
<lgierth> these run alongside bitswap in ipfs
arkadiy has quit [Quit: Page closed]
slothbag has joined #ipfs
ianopolous has joined #ipfs
smtudor has joined #ipfs
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
ianopolous_ has quit [Ping timeout: 256 seconds]
captain_morgan has quit [Quit: Ex-Chat]
captain_morgan has joined #ipfs
<pjz> ugh, pin ls docco is unclear
<pjz> if you 'ls direct' you don't get the top-level recursive pins, you only get direct ones
<pjz> ...even though the top object of a recursive pin is also a direct pin
<pjz> though I'm unclear if that's a bug in the docs or the functionality
smtudor has quit [Quit: WeeChat 1.5]
ulrichard has joined #ipfs
<A124> Just to confirm, will v0.4 files still be retrievable with v0.5, or will there not be backwards compatibility?
<lgierth> they will still be retrievable
<A124> Is that a long term thing, aka v1?
<lgierth> it's not v1, but yes the new key format is there to stay
<lgierth> it's one of these occasions where we notice we've missed a spot when making everything designed to be pluggable
<A124> Well, I meant whether the v0.4 files will be retrievable with v1
<lgierth> between v0.4.0 and v0.3.x the incompatible change was multistream-select at the lowest layer of the network stack
<lgierth> oh yeah definitely
<A124> That is what I wanted to hear. Not that I care, some other people started using ipfs, and would have had to update links. Thanks!
ygrek_ has joined #ipfs
ygrek has quit [Ping timeout: 245 seconds]
basilgohar has quit [Ping timeout: 260 seconds]
basilgohar has joined #ipfs
pfrazee has quit [Remote host closed the connection]
PseudoNoob has quit [Quit: Leaving]
<dignifiedquire> whyrusleeping: could you think of a reason why reading from mfs shortly after writing with flush false could just hang and never finish (I do a stats call which succeeds before)
chax has joined #ipfs
<A124> When two peers have same keys. Do they talk to each other or what happens?
<A124> I noticed that people with the router killer issue might also be using Tor. So that gave me this question
chax has quit [Ping timeout: 250 seconds]
chax has joined #ipfs
Codebird has quit [Read error: Connection reset by peer]
Codebird has joined #ipfs
shizy has quit [Ping timeout: 260 seconds]
Kane` has quit [Remote host closed the connection]
ZaZ has quit [Quit: Leaving]
pfrazee has joined #ipfs
pfrazee has quit [Ping timeout: 244 seconds]
chax has quit [Remote host closed the connection]
chax has joined #ipfs
ulrichard has quit [Remote host closed the connection]
chax has quit [Ping timeout: 244 seconds]
chax has joined #ipfs
fleeky_ has joined #ipfs
fleeky has quit [Ping timeout: 260 seconds]
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
chax_ has joined #ipfs
chax has quit [Read error: Connection reset by peer]
chax_ has quit [Remote host closed the connection]
chax has joined #ipfs
chax has quit [Ping timeout: 244 seconds]
galois_d_ has quit [Remote host closed the connection]
galois_dmz has joined #ipfs
haitch_ has joined #ipfs