<frood>
AniSkywalker: I wouldn't put too much stock in the filecoin whitepaper.
<AniSkywalker>
frood it's not half bad :P it's just a few little issues
<AniSkywalker>
and I think I figured out what I was missing frood
<AniSkywalker>
tau is essentially a record (without the economic aspect)
<frood>
there's some architectural stuff that I think necessitates a complete redesign
<AniSkywalker>
What I've found is that it seems disconnected from the PoR mechanism and the blockchain.
<AniSkywalker>
But the premise is fairly sound. frood, what would your architectural stuff comprise?
<AniSkywalker>
It's not too late for FileCoin spec 2 :P
<frood>
PoW block grinding is likely more profitable than adding storage/bandwidth
<frood>
the per-block piece bundle must be either too large to transmit, or too small to be meaningful
<AniSkywalker>
I agree w/ the second one.
<AniSkywalker>
For the first, I like the idea of proof of stake.
<frood>
slow PoR encoding implies that uploads will be restricted to single-digit Mbps
<AniSkywalker>
proof of stake means more reliable peers make the calls.
<AniSkywalker>
frood is there a fast PoR encoding?
<frood>
limited lifespan merkle proofs are pretty fast. or Sia style
<AniSkywalker>
frood how would that work in practice?
<frood>
how Sia does it: create a merkle tree from chunks of the file, embed the root in the blockchain, and require nodes to post a merkle proof of arbitrary chunks every n blocks.
<frood>
the data chunk + the proof is small enough to put on-chain, and it provides a strong probabilistic proof of retrievability after a few periods
<frood>
plus Sia penalizes servers that fail scheduled proofs
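A minimal sketch of the Sia-style scheme frood describes: build a merkle tree over fixed-size chunks, commit the root, and later post the challenged chunk plus its sibling path so anyone can recompute the root. The chunking, the odd-level padding rule, and the use of SHA-256 are assumptions made for illustration, not Sia's actual proof format.

```go
// Minimal sketch of a Sia-style storage proof as described above.
// Assumptions (not Sia's real format): fixed-size chunks, SHA-256,
// and a binary merkle tree padded by duplicating the last node.
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

func hashPair(a, b []byte) []byte {
	h := sha256.Sum256(append(append([]byte{}, a...), b...))
	return h[:]
}

// rootAndProof builds the tree bottom-up, returning the root plus the
// sibling path ("proof") for the challenged leaf index.
func rootAndProof(chunks [][]byte, challenge int) (root []byte, proof [][]byte) {
	level := make([][]byte, len(chunks))
	for i, c := range chunks {
		h := sha256.Sum256(c)
		level[i] = h[:]
	}
	idx := challenge
	for len(level) > 1 {
		if len(level)%2 == 1 { // pad odd levels
			level = append(level, level[len(level)-1])
		}
		proof = append(proof, level[idx^1]) // sibling of the tracked node
		next := make([][]byte, 0, len(level)/2)
		for i := 0; i < len(level); i += 2 {
			next = append(next, hashPair(level[i], level[i+1]))
		}
		level, idx = next, idx/2
	}
	return level[0], proof
}

// verify recomputes the root from the challenged chunk and its proof;
// this is what other nodes (or the chain) would check.
func verify(root, chunk []byte, challenge int, proof [][]byte) bool {
	h := sha256.Sum256(chunk)
	cur := h[:]
	for _, sib := range proof {
		if challenge%2 == 0 {
			cur = hashPair(cur, sib)
		} else {
			cur = hashPair(sib, cur)
		}
		challenge /= 2
	}
	return bytes.Equal(cur, root)
}

func main() {
	chunks := [][]byte{[]byte("c0"), []byte("c1"), []byte("c2"), []byte("c3")}
	// The uploader embeds root in the chain; every n blocks the host
	// posts (chunks[i], proof) for a randomly chosen i.
	root, proof := rootAndProof(chunks, 2)
	fmt.Println("proof ok:", verify(root, chunks[2], 2, proof))
}
```

The proof is only a logarithmic number of sibling hashes, which is what keeps the on-chain footprint small.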
<AniSkywalker>
So with filecoin the bundle is only comprised of new pieces frood
<frood>
Shacham and Waters seems to scale in linear time, while Sia-style merkle trees scale in log time.
<AniSkywalker>
So it's only whatever's been Put in the last block
<AniSkywalker>
That's the cost of hard proof of retrievability, no?
<frood>
the bundle also contains the challenge pieces
<AniSkywalker>
Sure, so in a proof-of-stake model only the top 100 people actually need the bundle
<AniSkywalker>
I.e. in Tendermint
<frood>
all validating nodes need the bundle
<frood>
the proofs verify the chain, but the bundle needs to be dispersed to prevent scarcity attacks
<AniSkywalker>
Right, so the validating nodes in Tendermint are the top 100
<AniSkywalker>
Which in Tendermint would mean that established, reputable hosts are given more burden to validate.
<frood>
are we expecting only 100 nodes to be storing pieces?
<AniSkywalker>
No, that's 100 nodes who need the bundle specifically for validation.
<AniSkywalker>
Other nodes can fetch it or parts at their leisure.
<frood>
assuming the validating nodes are honest. the filecoin paper notes that the bundle exists because there is an incentive to hoard pieces without sharing
<frood>
and the bundle isn't needed for validation. the proofs suffice.
<frood>
(otherwise you couldn't verify the chain from genesis without all pieces ever)
<AniSkywalker>
Right. OK, so you are suggesting a change from hard PoRs to what Sia does--what makes FileCoin unique?
<frood>
no, I'm not suggesting that, just saying that I think it's a significant architectural problem to be overcome
<AniSkywalker>
Oh, for sure.
ylp has joined #ipfs
<AniSkywalker>
What I was thinking was that instead of passing around the bundle, FileCoin could be built on IPFS, and it could simply be transferred individually that way.
<AniSkywalker>
Since there's no promise of privacy anyways (encrypt it :P ) that means not everybody needs every block--only the validators.
<frood>
sure, you could have each bundle be an IPFS hash, but nodes still need to retrieve it to be sure that it's available.
<AniSkywalker>
Well, I mean that nodes wishing to host data / mine blocks download the parts of the bundle they want--i.e. to take a specific contract.
<AniSkywalker>
Not everybody needs everything.
<AniSkywalker>
But also, I'm wondering why the bundle is required for validation--is there a risk the record is invalid?
<frood>
then you get into situations where filecoin promises existence, but not availability.
<frood>
the bundle is not required for validation. it's required for preventing scarcity attacks via hoarding.
<AniSkywalker>
How would such an attack play out?
<AniSkywalker>
And why is it not resolved by peers hosting their content on IPFS until a few blocks go by and enough peers are keeping the data to their satisfaction?
<frood>
that assumes IPFS nodes behave altruistically
<frood>
the attack is basically: piece Qm...zzz is scarce in the network. I can prove Qm...zzz and collect the reward. My incentive is to prevent others from proving piece Qm...zzz, so I will not make it available.
<frood>
without the bundle, nodes are incentivized to hold pieces and prove them, without providing access.
<AniSkywalker>
Ah, I see. What if the fundamental promise of FileCoin is to publish something on IPFS?
<AniSkywalker>
I.e. to make it available to the network?
<AniSkywalker>
I suppose you'd have a hard time proving that.
<frood>
IPFS is called the permanent web, but I've taken down 5 IPFS sites this week, because no other nodes bothered to retrieve the blocks.
<frood>
filecoin needs to promise existence and availability in order to be useful.
ianopolous has quit [Read error: Connection reset by peer]
<frood>
(and preferably a certain degree of performance)
ianopolous has joined #ipfs
<AniSkywalker>
frood isn't that what FileCoin is trying to solve, incentivizing nodes to keep blocks?
<AniSkywalker>
Why couldn't that happen over IPFS?
<frood>
it could, but it can't rely on IPFS nodes behaving altruistically
<AniSkywalker>
Of course not, that's what the blockchain's for :P
<AniSkywalker>
and FileCoin for that matter
<AniSkywalker>
frood so the bundle only has to have the new pieces from the current block, correct?
<frood>
it also has the k challenge pieces
<frood>
which are randomly determined by the PoW element.
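The whitepaper doesn't spell out how the challenge set is chosen; a common construction (an assumption here, not something from the Filecoin spec) is to derive the k indices deterministically from the previous block hash, so the challenge is unpredictable before the block exists but every validator computes the same set afterwards.

```go
// Hypothetical sketch: derive the k challenge piece indices from a block
// hash. This is a generic "random beacon" construction, not the actual
// Filecoin spec.
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// challengeIndices hashes blockHash||counter and reduces it mod numPieces.
// The set is unpredictable before the block exists, yet every node can
// recompute and verify the same indices afterwards.
func challengeIndices(blockHash []byte, numPieces uint64, k int) []uint64 {
	out := make([]uint64, 0, k)
	for i := 0; i < k; i++ {
		h := sha256.New()
		h.Write(blockHash)
		var ctr [8]byte
		binary.BigEndian.PutUint64(ctr[:], uint64(i))
		h.Write(ctr[:])
		sum := h.Sum(nil)
		out = append(out, binary.BigEndian.Uint64(sum[:8])%numPieces)
	}
	return out
}

func main() {
	prev := sha256.Sum256([]byte("previous block"))
	fmt.Println(challengeIndices(prev[:], 1000000, 5)) // e.g. k = 5 challenges
}
```

A host that actually stores only a fraction f of the pieces it claims survives k independent challenges with probability roughly f^k, which is why k works as a difficulty knob.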
<AniSkywalker>
Ok, so how big can it get?
<AniSkywalker>
Is it semi-linear with the number of nodes storing?
<AniSkywalker>
Since the number of challenges would therefore rise?
<frood>
¯\_(ツ)_/¯
<frood>
k is a tunable difficulty parameter
ylp has quit [Ping timeout: 255 seconds]
<AniSkywalker>
Oh yeah forgot about k :)
<frood>
it's not detailed how big pieces are, or how many pieces are in the challenge bundle
<AniSkywalker>
frood what if k were scaled by the size of the previous bundle, to reduce the bundle size when lots of challenges are being sent?
<frood>
I'd guess k is tuned automatically like bitcoin difficulty, but this is getting into speculation
<AniSkywalker>
Well that'd make sense.
<AniSkywalker>
Alright, I'm off for tonight, great talking frood
<kenshyx>
I have data from 0.4.4 and after upgrade to 0.4.5 it requires migration
<MikeFair>
kenshyx: is that from launching ipfs daemon --migrate?
<kenshyx>
y
<MikeFair>
kenshyx: Not that it's actually fixing the problem; but if you have another machine (or VM/jail) you can light up a second instance and init/fetch into the new version
<kenshyx>
I don't start it with migrate
<MikeFair>
ahh; same difference I think
<kenshyx>
I have huge data store :/
<MikeFair>
kenshyx: Then kill your existing repo; fire it up and migrate back/pin
<MikeFair>
ahh
<MikeFair>
are there any logs?
<MikeFair>
is there a particular block it wants but doesn't have or that's causing problems?
* MikeFair
has never looked; is just assuming there would/should be
<kenshyx>
ERROR: failed to convert flatfs datastore: remove /pinned/blocks-v4/CIQPM: directory not empty
<MikeFair>
does ipfs pin ls work?
<MikeFair>
I'm thinking capture the pinlist; upgrade; then put the pins back
<kenshyx>
that's so bad >_>
<MikeFair>
Or wait for these guys
<MikeFair>
google search?
<MikeFair>
(on the error)
<MikeFair>
You could probably clear that error by manually emptying the directory, but I don't know the consequences
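MikeFair's "capture the pinlist; upgrade; then put the pins back" workaround could look roughly like the sketch below. ipfs pin ls and ipfs pin add are real go-ipfs commands; the wrapper program itself is only an illustration, and re-pinning will refetch from the network any blocks the failed migration lost.

```go
// Rough sketch of the "save pins, migrate, re-pin" idea from above.
// `ipfs pin ls` / `ipfs pin add` are real go-ipfs commands; this wrapper
// program is just an illustration.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// savePins captures the recursive pin list from the current repo.
func savePins() ([]string, error) {
	out, err := exec.Command("ipfs", "pin", "ls", "--type=recursive").Output()
	if err != nil {
		return nil, err
	}
	var hashes []string
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		// output lines look like "<hash> recursive"; keep just the hash
		if fields := strings.Fields(sc.Text()); len(fields) > 0 {
			hashes = append(hashes, fields[0])
		}
	}
	return hashes, sc.Err()
}

// restorePins re-pins every saved hash against the (upgraded) repo,
// refetching any blocks that are no longer local.
func restorePins(hashes []string) {
	for _, h := range hashes {
		if err := exec.Command("ipfs", "pin", "add", h).Run(); err != nil {
			fmt.Println("failed to re-pin", h, ":", err)
		}
	}
}

func main() {
	hashes, err := savePins()
	if err != nil {
		panic(err)
	}
	fmt.Printf("saved %d pins; upgrade or re-init the repo, then re-pin\n", len(hashes))
	// ... run the 0.4.5 upgrade (or a fresh `ipfs init`) out of band ...
	restorePins(hashes)
}
```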
espadrine has joined #ipfs
ShalokShalom has quit [Remote host closed the connection]
ShalokShalom has joined #ipfs
aquentson has joined #ipfs
maciejh has quit [Ping timeout: 240 seconds]
jkilpatr has joined #ipfs
maxlath has quit [Ping timeout: 240 seconds]
suttonwilliamd has quit [Quit: Leaving]
suttonwilliamd has joined #ipfs
suttonwilliamd has quit [Remote host closed the connection]
suttonwilliamd has joined #ipfs
leeola has joined #ipfs
grosscol has joined #ipfs
maciejh has joined #ipfs
maxlath has joined #ipfs
DiCE1904 has joined #ipfs
tmg has joined #ipfs
ylp has joined #ipfs
<MikeFair>
/msg AphelionZ For whenever you see this: I got the updateIPNS function working on my node; now that it's working it might not be necessary, but it does provide a fixed point in space. The command to add the file and then publish it is: this.ipfs.add([new buffer.Buffer(index)])
<MikeFair>
Now I haven't seen how to provide a keyfile to update a separate ipns address dedicated to the user
<MikeFair>
like I can on the command line
<MikeFair>
I was able to import this into the DAG but it didn't work like I expected
cemerick has quit [Ping timeout: 240 seconds]
<MikeFair>
I think I know what needs to be done; but I think just working with files is fine for now
<MikeFair>
If added a "data" node to this object with the paste text; then taught your pastebin to load this file as a JSON object and select the data field; then I think you'd have a winner
<MikeFair>
You'd have the "history" references; and the "page data" referneces
<MikeFair>
So far I have not gotten ipfs js api to retrieve a file via ipns
<AphelionZ>
MikeFair: your last ipfs get isnt working
<MikeFair>
Hmm I see some errors on ipfs daemon; I'm going to kick it
<MikeFair>
AphelionZ: ok, the original address is updated now
ntzor has joined #ipfs
gmoro_ has quit [Ping timeout: 240 seconds]
<ntzor>
Are there any plans for TOR integration ?
<MikeFair>
ntzor: I can't say; I'm only responding so the crickets don't chirp too long; but tbh, I'm not exactly certain what TOR integration would look like
gmoro has joined #ipfs
<MikeFair>
I know what TOR is; just not sure what the "integration" would be
arpu has joined #ipfs
gmoro has quit [Remote host closed the connection]
<MikeFair>
ntzor: You can certainly hit ipfs / ipns dns nodes _from TOR_
gmoro has joined #ipfs
<ubiquitous[m]>
Well that explains it. @mikefair you should watch the IPFS videos where compatibility with tor, amongst other things, is explicitly discussed.
<ubiquitous[m]>
Some of your answers the other day didn't make much sense either. Thanks for trying nonetheless, check out the videos. Important we get these sorts of things right. Oh. And if you know of a link that describes how to create a permanent ID for a dynamic content group (ideally decentralised also) awesome.
<MikeFair>
ubiquitous[m]: I've been reading a lot of the GitHub docs; and do my best to tell ppl I'm not an expert here; and ask ppl to correct if whatever I say is off ; but I've found the channel tends to remain quiet for too long unless I say something that triggers them to respond
<MikeFair>
:)
<MikeFair>
As for making a permanent ID to a dynamic content group; are you thinking something other than IPNS?
<MikeFair>
ipfs key gen nickname ; ipfs add [directoryofnewdata] ; ipfs name publish -k nickname hashofnewdatadirectory
<MikeFair>
(oh and recursive on the add)
<MikeFair>
ubiquitous[m]: I am very curious about what answers you saw that weren't making sense though :)
<A124>
Is there tool/package to get ipfs hash of a file and that's it?
<MikeFair>
ipfs file stat --hash /local/file # if it's already in your local "files" bin
<MikeFair>
A124: or do you mean without adding it?
<A124>
Yeah, without adding and being as small as possible.
<MikeFair>
correction: ipfs files stat --hash /local/file # "files" the s is important ;)
<MikeFair>
A124: I don't know of a way because the file has to be chunked out before it can be addressed
<MikeFair>
A124: the ipfs javascript library is about 1.5MB
<MikeFair>
A124: should work with node.js to call its functions
<afdudley>
is anyone working on IANA stuff?
tmg has quit [Ping timeout: 260 seconds]
<MikeFair>
A124: There might be something but I don't know it ; [aside from ripping apart go-ipfs to strip out the add routine without actually writing anything]
<lgierth>
afdudley: what's the iana stuff?
<kpcyrd>
MikeFair: I'm not sure you can do TXT lookups from within tor
<MikeFair>
kpcyrd: I was thinking sites like ipfs.io
<lgierth>
that's cool but not what afdudley was asking about
<MikeFair>
afdudley: Sure IPFS as DNS Resolver Cache
<MikeFair>
afdudley: So that's currently an IPFS file
<lgierth>
(i think)
<MikeFair>
afdudley: And documentation of domain authority records
<afdudley>
lgierth: is right, i'm talking more about adding blockchains to the OSI stack.
<lgierth>
people have been adding all kinds of blockchain-ish data to ipfs and moving it around, ethereum, zcash, git
<MikeFair>
afdudley: you say blockchains I think distributed *coin ledger entries
<afdudley>
and exposing all the different *sec protocols to the blockchain and from there, the end user.
__uguu__ has quit [Ping timeout: 268 seconds]
<lgierth>
kumavis has been working on ethereum-on-ipfs and whyrusleeping has worked with zcash data
<afdudley>
Yeah, i've talked to whyrusleeping in the past :D
<MikeFair>
where's the link between those and IANA?
<lgierth>
cool
<lgierth>
be aware that you often don't even need registries or authorities
<lgierth>
(blockchains are another kind of authority)
<MikeFair>
lgierth: sure; got it
<lgierth>
e.g. for securing internet traffic, you don't need certificate authorities as in IPSEC or DNSSEC
<MikeFair>
lgierth: I do like "curated spaces"
<afdudley>
lgierth: unpack that a bit.
<afdudley>
please :D
<MikeFair>
lgierth: AFAIK DNSSEC isn't about security; it's about authorization
<lgierth>
check out cjdns for example, it forms a p2p ipv6 network, completely secure and all, but without any IPSEC or IANA in there. your ipv6 address is the hash of your pubkey (= no need for address authority), and that pubkey is at the same time used to encrypt/sign packets (= no need for cert authorities) -- nodes just authenticate themselves
<ubiquitous[m]>
DNSSEC is about security. Prevents DNS spoofing.
<MikeFair>
ubiquitous[m]: Spoofing is about updating privilege/authority
<afdudley>
lgierth: ah, yeah, i'm aware of these projects. I think i'm coming at this from "the other side"
<MikeFair>
ubiquitous[m]: It's "Authorizing" who can make an update; I agree security is in the name; but it's not the same as security as in privacy or encypting data transmissions
<afdudley>
like "how can infromation from BGPSEC (for example) help secure a blockchain?"
<MikeFair>
ubiquitous[m]: It's not trying to lock down data distribution
<lgierth>
it's easy to move IP traffic over libp2p connections
<lgierth>
and gain all these nice properties
<lgierth>
then eat the internet one AS peering at a time
<afdudley>
yeah, so this exactly the discussion i'm trying to avoid :D
<afdudley>
yeah, I don't think network operators will like that idea.
<afdudley>
(to put it politely)
<lgierth>
we need new protocols with cryptography as a core part of the design, not just somehow bolted on top as with IPSEC and DNSSEC
<lgierth>
oh god BGPSEC *is* a thing
<afdudley>
I half agree with that.
<lgierth>
network operators will be delighted how nicely it'll interoperate
<afdudley>
Do you know any network operators?
<afdudley>
:D
<lgierth>
but yeah word, i've had developers of more traditional routing protocols yell at me
<afdudley>
Yes, I would expect them to. regardless of if I agree with them or not.
<Kubuxu>
lgierth: I am very sad about IPSEC and DNSSEC
<lgierth>
SAD!
<Kubuxu>
both those protocols are essentially useless
<Kubuxu>
DNSSEC is controlled by the US gov
<lgierth>
wait you reminded me of something
<MikeFair>
lgierth: I'll bet; do you calm them down by telling them ipfs is more a layer 5 protocol?
<Kubuxu>
and IPSEC was lobbied to be as insecure as possible
<lgierth>
MikeFair: no, i'm after all layers
<lgierth>
anyhow, gotta work
<lgierth>
writing specs is more important than ranting :D
<afdudley>
lgierth: enjoy :D
<MikeFair>
Anyone working on/proposing marking file blocks as temporary, with a TTL like IPNS registrations have?
<lgierth>
blocks are inherently temporary until you pin them
ulrichard has quit [Remote host closed the connection]
<MikeFair>
lgierth: I was thinking of some kind of marking that would give these blocks a preference for purging
<MikeFair>
lgierth: after timeout
<lgierth>
do it in your datastructure
<MikeFair>
lgierth: I'm expecting to generate a lot of intermediate files during a user's web session
<MikeFair>
lgierth: I guess I don't follow what you mean there
<MikeFair>
For example, I plan on keeping some encrypted passkeys out there in a file; when this file changes, I'll unpin the current one and pin the updated one
ylp1 has quit [Quit: Leaving.]
<MikeFair>
I'm not asking to actually purge the blocks; but it'd be great to say "hey nodes, if you'd like to reclaim some space; take this one first"
<MikeFair>
I figured I'd do that by readding it with a new TTL marker like an ipns registration has
<MikeFair>
How do I "do that in the datastructure"?
jkilpatr has quit [Ping timeout: 260 seconds]
Aranjedeath has joined #ipfs
kenshyx has quit [Remote host closed the connection]
shizy has joined #ipfs
<sprint-helper1>
The next event "IPFS All Hands Call" is in 15 minutes.
<victorbjelkholm>
nicolagreco: did you write down your notes about the forum things we discussed? If so, I cannot find it anywhere
espadrine has quit [Ping timeout: 260 seconds]
s_kunk has quit [Ping timeout: 240 seconds]
pfrazee has joined #ipfs
maciejh has quit [Ping timeout: 268 seconds]
chriscool has quit [Ping timeout: 260 seconds]
<dryajov>
daviddias: dignifiedquire: whyareyousleeping: lguierth: is there any work happening on the relay part? is that something you guys need help with? I was looking for something I can tackle... also if there is something else I can help with, let me know
Guest5348 has quit [Read error: Connection reset by peer]
<Mateon1>
Came to IRC to ask if anybody's interested in debug dumps of a node that is currently using 7.9 gigs of ram
<Mateon1>
7.49 gigs*
suttonwilliamd has quit [Read error: Connection reset by peer]
suttonwilliamd has joined #ipfs
suttonwilliamd has quit [Read error: Connection reset by peer]
suttonwilliamd has joined #ipfs
suttonwilliamd has quit [Client Quit]
<whyrusleeping>
Mateon1: yeah, i'd take a look at them :)
suttonwilliamd has joined #ipfs
<Mateon1>
Cool, right now taking them
ntzor has quit [Quit: Leaving]
caiogondim has joined #ipfs
<Mateon1>
Oh, I'm still running the prerelease
<whyrusleeping>
eh, still worth sending
<Mateon1>
The mem usage grew from 5 gigs to 7.5 in about 30 minutes while downloading a large dataset (peermaps)
<Mateon1>
QmXYBuXSH6qiyo6y4s4FLeU4x9UkntrGmA4p5z5WAn6y4c Get it while it's hot :)
aquentson1 has joined #ipfs
aquentson has quit [Ping timeout: 240 seconds]
jkilpatr has joined #ipfs
matoro has quit [Ping timeout: 260 seconds]
JayCarpenter has joined #ipfs
Encrypt has joined #ipfs
arkimedes has joined #ipfs
<Mateon1>
On another note, it's very likely that the republisher is stuck in that dump
arkimedes has quit [Ping timeout: 240 seconds]
mildred4 has joined #ipfs
arkimedes has joined #ipfs
Encrypt has quit [Quit: Quit]
espadrine has joined #ipfs
maxlath has quit [Ping timeout: 255 seconds]
aquentson has joined #ipfs
aquentson1 has quit [Ping timeout: 240 seconds]
aquentson1 has joined #ipfs
ylp has quit [Ping timeout: 255 seconds]
ylp has joined #ipfs
aquentson has quit [Ping timeout: 260 seconds]
matoro has joined #ipfs
ylp has quit [Ping timeout: 255 seconds]
maxlath has joined #ipfs
DiCE1904 has quit [Read error: Connection reset by peer]
ylp has joined #ipfs
DiCE1904 has joined #ipfs
matoro has quit [Ping timeout: 260 seconds]
gully-foyle has joined #ipfs
JayCarpenter has quit [Ping timeout: 260 seconds]
ylp has quit [Ping timeout: 255 seconds]
seharder has quit [Read error: Connection reset by peer]
ygrek has joined #ipfs
ShalokShalom_ has joined #ipfs
ShalokShalom has quit [Read error: Connection reset by peer]
ShalokShalom_ is now known as ShalokShalom
ylp has joined #ipfs
<herzmeister[m]>
sooooo if i use ipfs on a btrfs volume will that de-duplicate added data effectively? or is that a naive assumption?
<whyrusleeping>
herzmeister[m]: ipfs already does dedupe on its own
<whyrusleeping>
i'm not sure if btrfs dedupe would improve upon that at all
<herzmeister[m]>
if that were the case my .ipfs folder wouldn't contain gbs of data
robattila256 has joined #ipfs
arkimedes has quit [Ping timeout: 240 seconds]
<herzmeister[m]>
i dont mean the deduplication that is done internally, i mean the actual data that i share via ipfs; it is practically duplicated into the .ipfs folder
muvlon has quit [Quit: Leaving]
<frood>
you're asking if there's a way to ipfs add file.txt, and have ipfs serve it from its location in the filesystem, rather than make blocks in .ipfs?
<whyrusleeping>
herzmeister[m]: ah, i see
<whyrusleeping>
yeah, a btrfs dedupe might work for that
<whyrusleeping>
I've actually never tried
<whyrusleeping>
despite using btrfs *and* zfs on various machines of mine..
Catriona has quit [Remote host closed the connection]
<herzmeister[m]>
depends on btrfs actually detecting the data is the same; dont know what the constraints for that are; and of course works only if ipfs doesnt scramble the data somehow
ylp has quit [Ping timeout: 255 seconds]
jkilpatr has quit [Ping timeout: 240 seconds]
<lgierth>
it adds a thin wrapper around each 256K block
<lgierth>
unless using --raw-leaves
Encrypt has joined #ipfs
<herzmeister[m]>
that indeed might break btrfs's deduplication
<herzmeister[m]>
whats the downside of using --raw-leaves?
<frood>
herzmeister: nobody likes salad.
<herzmeister[m]>
i see ;)
<whyrusleeping>
herzmeister[m]: no downside, its just a newer feature
<whyrusleeping>
and older clients won't be able to handle data you create with it
<whyrusleeping>
(i guess thats a downside)
<herzmeister[m]>
ah good then... dont give a shit, i'm not inclusive of old clients
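Whether btrfs can dedup comes down to whether the stored block bytes match the original file bytes. Below is a rough sketch of the 256K chunking mentioned above: with --raw-leaves the stored block is exactly the chunk, while the default adds a small framing wrapper so the bytes no longer line up with the source file. The hashing shown and the placeholder filename are illustrative assumptions.

```go
// Sketch of the 256 KiB chunking mentioned above. With --raw-leaves the
// stored block is exactly this chunk; without it, each chunk is wrapped
// in a small framing envelope, so the on-disk bytes differ from the
// original file and filesystem-level dedup (btrfs) can't match them.
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

const chunkSize = 256 * 1024 // the 256K block size from the discussion above

func main() {
	f, err := os.Open("somefile.bin") // placeholder filename
	if err != nil {
		panic(err)
	}
	defer f.Close()

	buf := make([]byte, chunkSize)
	for i := 0; ; i++ {
		n, err := io.ReadFull(f, buf)
		if n > 0 {
			// hash of the raw chunk; the real IPFS block hash also
			// depends on codec/framing, this is just an illustration
			fmt.Printf("chunk %d: %d bytes, sha256 %x\n", i, n, sha256.Sum256(buf[:n]))
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			break
		}
		if err != nil {
			panic(err)
		}
	}
}
```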
suttonwilliamd has quit [Read error: Connection reset by peer]
suttonwilliamd has joined #ipfs
suttonwilliamd has quit [Remote host closed the connection]
suttonwilliamd has joined #ipfs
__uguu__ has quit [Ping timeout: 240 seconds]
rendar has quit [Ping timeout: 255 seconds]
<whyrusleeping>
lol
jkilpatr has joined #ipfs
__uguu__ has joined #ipfs
<MikeFair>
whyrusleeping: Are there any documents that describe how an IPNS node gets its link updated without changing its address?
maciejh has joined #ipfs
<MikeFair>
I'd just like to understand how/why they are different from everything else that changes whenever their "contents" are changed
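The short answer to MikeFair's question: an IPNS name is derived from a key pair, and publishing an update means signing a new record that points at the latest content hash, so the name itself never moves. The sketch below shows the shape of that idea; it is not the real IPNS record format.

```go
// Conceptual sketch of why an IPNS address stays stable: the name is a
// hash of the public key, and the thing that changes is a signed,
// sequenced record pointing at the current content hash. This is NOT
// the actual IPNS record format, just the idea.
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"fmt"
)

type record struct {
	Value string // e.g. "/ipfs/Qm..." of the current content
	Seq   uint64 // higher sequence number wins
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil)

	// The "name" is derived from the key, so it never changes.
	name := sha256.Sum256(pub)
	fmt.Printf("stable name: %x...\n", name[:8])

	// Publishing an update = signing a new record with a higher Seq.
	rec := record{Value: "/ipfs/<new content hash>", Seq: 2}
	msg := []byte(fmt.Sprintf("%s|%d", rec.Value, rec.Seq))
	sig := ed25519.Sign(priv, msg)

	// Resolvers check the signature against the key the name commits to.
	fmt.Println("record valid:", ed25519.Verify(pub, msg, sig))
}
```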
maciejh has quit [Remote host closed the connection]
maciej_ has joined #ipfs
s_kunk has joined #ipfs
maciej_ has quit [Ping timeout: 268 seconds]
<caiogondim>
Hello folks. Any good examples on using IPFS to exchange data between peers on a browser?
<cblgh>
i just watched a talk on orbit which said that you could drag & drop things into orbit and share through there
<frood>
works fine for file-sharing and serving static content to browsers via a local daemon
<whyrusleeping>
works fine for use as a package management tool for the whole go-ipfs codebase ;)
rcat has joined #ipfs
<whyrusleeping>
also seems to work pretty well for peer to peer chat
<whyrusleeping>
though we're still working on getting latencies where we want them for that. doing things in the browser is hard
gmoro has joined #ipfs
<__uguu__>
for ipfs to be more useful for sharing large files vs bittorrent it needs anonymity as the default setting tbh. for serving static content like stylesheets or source code it is pretty decent at the moment.
<herzmeister[m]>
bittorrent is not anonymous either
<__uguu__>
that is my point
<__uguu__>
at the moment, ipfs is effectively a slightly more useful bittorrent but not enough to make it obsolete.
<herzmeister[m]>
ipfs is modular, so someone™ will write an onion- or turtle-routing extension for it
<__uguu__>
if someone™ does will it be merged into the mainline code base?
<__uguu__>
and if so, will it be the default setting for non servers?
<__uguu__>
i.e. short lived clients
<AniSkywalker>
__uguu__ onion routing has serious performance implications
<__uguu__>
tor is faster than people think
<__uguu__>
onion routing has improved a lot since 2006
<AniSkywalker>
I use it consistently
<AniSkywalker>
I know it's nowhere near the low latency people expect
<AniSkywalker>
Imagine someone in your house has a file, but you're routing through Tor. You have effectively undone any benefit in latency IPFS gave you.
<__uguu__>
when I used Tor while I had RCN as my ISP, it worked a little faster than without
<lgierth>
anonymity is overrated
<lgierth>
very very important
<AniSkywalker>
^
<AniSkywalker>
SSL != anonymous
<lgierth>
but not important enough to be on-by-default
<__uguu__>
that is very disapointing
<__uguu__>
anonymity is pointless if it's not the default setting
tmg has joined #ipfs
<lgierth>
we want to eventually have existing onion transport as part of go-ipfs, so you can flip a switch in your config
<AniSkywalker>
__uguu__ how so?
<AniSkywalker>
Tor is not the default browser. I think it's useful though.
<herzmeister[m]>
it probably won't be mainline IPFS because it's more complex and not needed in many scenarios, but that someone™ could easily provide a forked TIPFS project or the like
<__uguu__>
let's think for a second, what if the internet was built to have anonymity as the default ?
<AniSkywalker>
That's not possible.
<lgierth>
herzmeister[m]: no, we want to have it in ipfs itself, behind a config option
<__uguu__>
for you
<AniSkywalker>
No, like the foundation of a bunch of sites is identity.
<AniSkywalker>
For example, Google makes their revenue via ads. Anonymous == no ads == no Google.
<__uguu__>
identity does not require your state issued id
<dyce[m]>
does ipfs work in whonix
<lgierth>
-me back to work
infinity0 has joined #ipfs
<__uguu__>
I doubt anyone truly wants a world with google in full control
<AniSkywalker>
Banks need you to log in. Anonymous == no online banks == ??? == probably save more money but besides the point
<AniSkywalker>
__uguu__ Google will never be in full control.
<AniSkywalker>
Free market theory. But that's way besides the point.
<kevina>
I am trying to figure out how to get errors back while allowing the GC to continue if possible
<frood>
I may be wrong, those asm files may be output
<kevina>
whyrusleeping: by self-requesting a review, does that mean you are looking at it now?
<kevina>
or putting on the queue for later?
<whyrusleeping>
kevina: looking at it quickly now, but will provide full review sometime later today
<kevina>
a full review would be a bit premature, it's still too much of a WIP
<whyrusleeping>
caiogondim: The link from earlier didn't actually contain what I was hoping it would. Here's the thing I was meaning to send: https://kumavis.github.io/metamask-mesh/
gmoro has quit [Ping timeout: 240 seconds]
matoro has quit [Ping timeout: 255 seconds]
jose12[m] has joined #ipfs
cdetrio[m] has quit [Ping timeout: 245 seconds]
matoro has joined #ipfs
<whyrusleeping>
kevina: hrm... collecting errors that way is kinda confusing
<whyrusleeping>
what is your goal?
<whyrusleeping>
I think that its "enumerate all blocks, and report any errors encountered along the way" right?
<kevina>
whyrusleeping: Yes.
<kevina>
but that will require an API change
AniSkywalker has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<kevina>
I plan to push the collection as far up as possible without changing the API
cdetrio[m] has joined #ipfs
<kevina>
I am in the middle of eliminating the UnsafeToContinueError and will just send each error to the error channel
<whyrusleeping>
Hrm, okay
<whyrusleeping>
Using a channel seems like the right way to go
<whyrusleeping>
I would make sure to associate each error with the operation it came from
<kevina>
will do, also make sure the key that caused the problem is part of the error
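A rough sketch of the pattern kevina and whyrusleeping are converging on: enumerate the blocks, keep going on failure, and send each error, tagged with the operation and the offending key, down a channel. The types below are stand-ins, not the actual go-ipfs GC API.

```go
// Rough sketch of "enumerate all blocks, report errors on a channel,
// keep going" as discussed above. GCError, runGC, etc. are stand-in
// types, not the real go-ipfs API.
package main

import "fmt"

// GCError carries the failing operation and the key that caused it,
// as suggested in the discussion.
type GCError struct {
	Op  string // e.g. "enumerate", "delete"
	Key string // the block that caused the problem
	Err error
}

func (e GCError) Error() string {
	return fmt.Sprintf("gc %s %s: %v", e.Op, e.Key, e.Err)
}

// runGC walks the keys, deletes unreferenced ones, and sends every
// failure on errs instead of aborting, then closes the channel.
func runGC(keys []string, referenced map[string]bool, del func(string) error, errs chan<- GCError) {
	defer close(errs)
	for _, k := range keys {
		if referenced[k] {
			continue
		}
		if err := del(k); err != nil {
			errs <- GCError{Op: "delete", Key: k, Err: err}
			continue // keep collecting instead of stopping the GC
		}
	}
}

func main() {
	keys := []string{"QmA", "QmB", "QmC"}
	refs := map[string]bool{"QmA": true}
	del := func(k string) error {
		if k == "QmB" {
			return fmt.Errorf("directory not empty") // simulated failure
		}
		return nil
	}

	errs := make(chan GCError)
	go runGC(keys, refs, del, errs)
	for e := range errs {
		fmt.Println("gc error:", e)
	}
}
```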