<noffle>
pinned from my daemon on airport wifi -- nice!
seagreen has joined #ipfs
jkilpatr has quit [Ping timeout: 245 seconds]
TUSF has quit [Ping timeout: 272 seconds]
dignifiedquire has quit [Quit: Connection closed for inactivity]
TUSF has joined #ipfs
chris613 has joined #ipfs
chris613 has left #ipfs [#ipfs]
shizy has joined #ipfs
jsgrant_om has joined #ipfs
jsgrant_om has left #ipfs [#ipfs]
katamori has joined #ipfs
jsgrant_om has joined #ipfs
<katamori>
hello, is anyone available currently?
<crankylinuxuser>
I'm not a dev, but I know IPFS pretty well. What's up?
<katamori>
ah cool, very small non-technical question here
<katamori>
I uploaded some files to the network that I want to "spread" to be available for others when I'm off
<crankylinuxuser>
sure, fire away
<katamori>
is there any "central repository" where I can put my hashes for people to show and check out?
<crankylinuxuser>
Not easily, no.
<katamori>
:/ as a matter of fact, a list of IPFS pages and valid hashes would be awesome on its own
<crankylinuxuser>
However, with IPNS, you can link a domain name to a node hash, which can point to an HTML file of all the hashes
<crankylinuxuser>
this is some of the same problem of the early web, prior to search engines.
<crankylinuxuser>
But, some things that may help. If you have a website (even a free wordpress one), you can provide IPFS links there and they'd just work.
<katamori>
oh yes, I've been talking about it with someone in the subreddit
<crankylinuxuser>
That's probably the easiest. The next option is if you have your own VPS and domain.
<katamori>
oh so you mean listing my content on my own web page, instead of finding another repo for it
<crankylinuxuser>
Then you can use IPNS with a TXT record pointing at the IPFS node hash. Browsers that are aware can query this and hit IPFS before hitting your server.
<crankylinuxuser>
For the most part, you're using your DNS as a bootstrap to get on the IPFS network. From there, you can add new IPFS content and use the IPNS pointer to point to new sets of stuff.
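A rough sketch of that DNS bootstrap, assuming a hypothetical domain example.com and a placeholder site hash; the TXT record carries a dnslink entry that dnslink-aware gateways and clients resolve:
    # publish a TXT record on the domain (or a _dnslink subdomain), e.g.
    #   example.com.  IN  TXT  "dnslink=/ipfs/<your-site-hash>"
    dig +short TXT example.com              # verify the record is visible
    ipfs dns example.com                    # should print /ipfs/<your-site-hash>
    ipfs cat /ipns/example.com/index.html   # fetch content through the name (if the hash is a directory)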
<katamori>
I mean, that's not even that hard: make a list, ipfs add, and IIRC, ipfs public is the command for IPNS "binding"?
<crankylinuxuser>
Nope. The IPNS commands are "ipfs name [stuff]"
<crankylinuxuser>
ipfs name publish <ipfs-path> - Publish an object to IPNS.
<crankylinuxuser>
ipfs name resolve [<name>] - Gets the value currently published at an IPNS name.
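A minimal sketch of that publish/resolve flow, assuming a running daemon and a hypothetical list.html of links; the hash shown is a placeholder:
    ipfs add list.html                      # prints: added Qm<list-hash> list.html
    ipfs name publish /ipfs/Qm<list-hash>   # binds that hash to your node's peer ID
    ipfs name resolve                       # prints the /ipfs/ path your peer ID currently points to
    # others can then load /ipns/<your-peer-id> to get the latest list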
<katamori>
oh, almost
<crankylinuxuser>
And if I recall correctly, you have to do this every 12 hours or the IPNS loses its binding.
<crankylinuxuser>
But IPFS content is immutable. Once added, there's no changing it ever.
<katamori>
yup, I just realized after uploading a folder the first time
<katamori>
turned out it's a bit different from what I expected early on, but interesting nonetheless
<crankylinuxuser>
Oh it is. But think of it as: all hashes already exist. You just have to uncover the content that corresponds to the hash.
<crankylinuxuser>
It's not like you're polluting the namespace. It's already there. We just don't know what a new hash points to, until it's been found :)
<katamori>
heh, this idea reminds me of the guy who generated a gigantic set of alphabets that contains every possible word combination of the english language
<katamori>
or more simply, the idea that the statue is in the marble, you just have to carve it out
<crankylinuxuser>
exactly
jaboja has quit [Ping timeout: 268 seconds]
<crankylinuxuser>
The neat thing you can do now is have 100% client-side webapps with no server
<katamori>
wait, how many hash possibilities are there at all? not because of collisions, but I wonder if we can run out of files
<crankylinuxuser>
The developers have already considered that. There's something like 2^320 hashes available. And the protocol can extend to other hash formats (that's what the Qm at the beginning is for)
<whyrusleeping>
2^256 with sha256
<crankylinuxuser>
hmm.. where did I get 2^320 then ? thanks for the correction whyrusleeping :)
<katamori>
yeah, I was sure about the developers taking care of it, I was just interested in the scope
<whyrusleeping>
anytime :)
<katamori>
but yeah, with exponents of hundreds, it's waaaaaaay fine
<katamori>
also that Atari site is daaaaaaaaaaaaaaaaaaaaamn fine, pinned immediately
Bhootrk_ has joined #ipfs
<crankylinuxuser>
heh. I converted a few projects as well. Hit me up at jwcrawley@gmail.com and I can get the projects updated and uploaded.
<katamori>
not sure I'll memorize the mail address but since you apparently linked your github to the page, I'll surely find you later if I want :)
<katamori>
but...if you have something similar with Doom...I'm into it
<tangent128>
crankylinuxuser, katamori: You can even have 100% client side webapps that persist their state to be accessible from other devices.
<tangent128>
Though until somebody gets IPNS pinning better supported, it's not going to be reliable yet.
<katamori>
yes, I'm wondering about it right now... because someone visiting your ipfs site is not a guarantee that he'll also pin it, right?
<katamori>
and if he doesn't pin it, he won't store it local for long
<katamori>
right?
<tangent128>
Right, just that the files he touches will be cached for an indeterminate time
spacebar_ has joined #ipfs
<tangent128>
And you can't really pin IPNS yet; you can pin the content IPNS points to, but it won't auto-pin the new content when the pointer changes, and as far as I know there's no way for peers that don't have the private key to re-publish an entry they are interested in.
<katamori>
tangent128: that's a really strong bottleneck
<katamori>
well, since hashes are permanent, it's still somewhat more stable than link rot
<crankylinuxuser>
It's still a project being worked on. It'll be rough around the edges, especially in the area I'm interested in: message queuing.
<katamori>
that's just my 2 cents, but I have this obsession with being afraid for certain internet stuff to be gone forever, and it was a VERY HUGE motivator for me to check out ipfs
<crankylinuxuser>
That's an alpha feature that you have to turn on with a flag. But it's like MQTT over IPFS
<tangent128>
You could have a script on a node set up to poll IPNS and update a pin, but it's not built-in functionality.
<tangent128>
MQTT?
<crankylinuxuser>
or a cronjob
<tangent128>
Well, cron runs a script, yeah
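Nothing built in, as noted above; a rough cron-style sketch of such a script, where the IPNS name to track is hypothetical:
    #!/bin/sh
    # re-pin whatever an IPNS name currently points to; run periodically from cron
    NAME="/ipns/<peer-id-you-care-about>"
    TARGET=$(ipfs name resolve -r "$NAME") || exit 1   # e.g. /ipfs/Qm...
    ipfs pin add "$TARGET"                             # no-op if already pinned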
<crankylinuxuser>
Yeah. If you do anything in IoT, MQTT is a message queuing protocol. IPFS now supports something like that. It's frankly awesome.
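The experimental feature being referred to is pubsub, enabled with a daemon flag; a quick sketch with a made-up topic name:
    ipfs daemon --enable-pubsub-experiment   # start the daemon with the experiment on
    ipfs pubsub sub mytopic                  # in one shell: subscribe to a topic
    ipfs pubsub pub mytopic "hello"          # in another shell: publish a message to it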
<katamori>
oh speaking of which, n00b question: I was told that I actually support ipfs ecosystem even by simply running the daemon
<crankylinuxuser>
Er, yes.
<tangent128>
You're part of the DHT, even if you cache nothing, right?
<katamori>
can you explain why? I noticed it generated data traffic even when I'm not directly accessing data
<crankylinuxuser>
Unless you add content, you won't provide content. However, your node is used in network probe messages, as far as I understand
<katamori>
probe messages? so for routing?
<crankylinuxuser>
creation and maintenance of the network takes messages that describe the topology. Costs some bandwidth.
<crankylinuxuser>
That's a good way of putting it. Think of a dynamic, learning BGP with no central nodes.
ckwaldon has quit [Remote host closed the connection]
<katamori>
not sure if I got it: in terms of routing, it's the same as the internet is today (especially because both are mostly based on TCP) but the ipfs nodes are more functional, right? because there's no client-server approach
<tangent128>
The swarm needs an index so that it can find which node has a given hash.
<tangent128>
Every connected node takes part in providing the index.
<crankylinuxuser>
Also when someone requests a hash, that message jumps through the network. Asking machines also costs some small amount of bandwidth.
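Two commands that make this visible from a running node; the hash is a placeholder:
    ipfs swarm peers                 # peers your node is currently connected to
    ipfs dht findprovs <some-hash>   # ask the DHT which peers can provide that hash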
<crankylinuxuser>
I'm going to head on out. very sleepy. have a good one ;)
crankylinuxuser is now known as cranky-sleep
<katamori>
goodn8
<katamori>
I'm going too
katamori has quit [Quit: Page closed]
<tangent128>
I don't know if anybody awake knows, but is there any tooling yet, official or otherwise, to republish an IPNS entry you didn't write?
Bhootrk_ has quit [Quit: Leaving]
Bhootrk_ has joined #ipfs
shizy has quit [Ping timeout: 255 seconds]
spacebar_ has quit [Quit: spacebar_ pressed ESC]
chungy has quit [Ping timeout: 260 seconds]
ianopolous has quit [Ping timeout: 260 seconds]
Neur0 has quit [Quit: Deebidappidoodah!]
neurrowcat has joined #ipfs
ryantm has joined #ipfs
neurrowcat has quit [Client Quit]
neurrowcat has joined #ipfs
gmoro_ has quit [Ping timeout: 268 seconds]
Bhootrk_ has quit [Quit: Leaving]
<achin>
as far as i know, no such thing exists
Bhootrk_ has joined #ipfs
palkeo has joined #ipfs
palkeo has joined #ipfs
palkeo has quit [Changing host]
Bhootrk_ has quit [Quit: Leaving]
Bhootrk_ has joined #ipfs
neurrowcat has quit [Ping timeout: 268 seconds]
gmcabrita has quit [Quit: Connection closed for inactivity]
chungy has joined #ipfs
<kiboneu>
:q
<rozie>
hi. I have a question about censorship prevention with IPFS
<rozie>
I found https://ipfs.io/legal/ and there is Report Copyright Infringement on the IPFS Gateway Service
<rozie>
(how) does it work?
Mateon3 has joined #ipfs
<engdesart>
I would infer that you report the copyright infringement, and the devs will add it to the optional block list.
Mateon1 has quit [Ping timeout: 255 seconds]
Mateon3 is now known as Mateon1
<engdesart>
There's an optional list of things that can be blocked, which includes copyright infringement, some pornography, etc.
<rozie>
but the possibility of blocking some content goes completely against being an anti-censorship solution
<rozie>
it's just a matter of who will decide what should be censored and what not
<rozie>
without going into details like "what jurisdiction should a resource use" (different countries have different copyright laws; what is copyrighted in one country can be copyleft in another)
Foxcool has joined #ipfs
<rozie>
simple example: if there is a dictator, and someone has a compromising movie of him, made - of course - without his consent, and the dictator comes to report copyright infringement, would it be censored?
<whyrusleeping>
rozie: it's only blocked on the machines that we (Protocol Labs) run
<whyrusleeping>
It's up to each user to choose whether to respect the list of content that we choose to block
<whyrusleeping>
tangent128: It shouldn't be too difficult to write such tooling; if you're familiar with Go I can point you in the right direction
<tangent128>
whyrusleeping: It's been around five years since I last used go, but I'd definitely be willing to take a stab.
<whyrusleeping>
the put record TTL is set locally when a peer receives a record
<whyrusleeping>
so when i get a record, i set its TTL in my local store for now + 24 hours
dignifiedquire has joined #ipfs
<whyrusleeping>
that record itself might actually be valid for much longer than that
<tangent128>
Ah, so even if I set the TTL on the record to expire in 2046, it still needs rebroadcasting every, say, 12 hours. (but the rebroadcaster doesn't need my key until the expiration)
<whyrusleeping>
exactly
<tangent128>
And records with a later creation date take priority, regardless of expiration time (provided it's valid)
<whyrusleeping>
Yeap, they have a sequence number
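Roughly how that looks from the CLI, assuming the --lifetime option behaves as described; paths and IDs are placeholders:
    # sign a record that stays valid for a long time; peers still only cache it ~24h at a stretch
    ipfs name publish --lifetime=8760h /ipfs/<content-hash>
    ipfs name resolve /ipns/<your-peer-id>   # a later publish (higher sequence number) supersedes it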
<whyrusleeping>
brb, grabbing a snack
cwahlers has quit [Read error: Connection reset by peer]
cwahlers has joined #ipfs
ryantm has quit [Quit: Connection closed for inactivity]
<rozie>
whyrusleeping: thanks for the explanation
<rozie>
BTW how much data (GB/TB) is available on ipfs
espadrine has joined #ipfs
rendar has joined #ipfs
ecloud_wfh is now known as ecloud
warner has quit [Read error: Connection reset by peer]
warner has joined #ipfs
ygrek has quit [Ping timeout: 240 seconds]
maxlath has joined #ipfs
ylp has joined #ipfs
espadrine has quit [Ping timeout: 260 seconds]
Guest96076 has joined #ipfs
gmoro has joined #ipfs
Guest96076 has quit [Ping timeout: 272 seconds]
espadrine` has joined #ipfs
<Kubuxu>
rozie: If I were to guesstimate, somewhere under a PB (we ourselves have put a lot of TB into ipfs); might be more, I don't have other points of view. There might also be other people putting data into ipfs.
A124 has quit [Read error: Connection reset by peer]
mahloun has joined #ipfs
A124 has joined #ipfs
sirn has quit [Ping timeout: 255 seconds]
sirn has joined #ipfs
sirdancealot has joined #ipfs
sirn has quit [Ping timeout: 240 seconds]
sirn has joined #ipfs
jungly has joined #ipfs
sirn has quit [Ping timeout: 240 seconds]
sirn has joined #ipfs
sirn has quit [Ping timeout: 240 seconds]
kvda has joined #ipfs
Boomerang has joined #ipfs
ameba23 has quit [Read error: Connection reset by peer]
HoloIRCUser2 has joined #ipfs
Guest96076 has joined #ipfs
aedigix has quit [Remote host closed the connection]
cxl000 has joined #ipfs
aedigix has joined #ipfs
mahloun has quit [Quit: WeeChat 1.7.1]
gmoro has quit [Quit: Leaving]
gmoro has joined #ipfs
aedigix has quit [Remote host closed the connection]
aedigix has joined #ipfs
kantokomi has joined #ipfs
screensaver has joined #ipfs
<kantokomi>
I've been trying out ipfs for a bit now, but I still struggle with understanding how it is distributed rather than (de)centralized. When I add a file with ipfs init && ipfs daemon && ipfs add file, it seems that the only way to access it is through ipfs.io/ipfs/$HASH. If I understand correctly: upon a request, ipfs.io fetches the blocks from my computer and then outputs them, so what if ipfs.io
<kantokomi>
fails or is taken down?
<kantokomi>
How long does a file live if I added a file, then accessed it through ipfs.io and then turned off my daemon?
<substack>
kantokomi: somebody else can do `ipfs cat HASH` or `ipfs get HASH`
Yann1 has joined #ipfs
<substack>
and their computer will hop on the ipfs DHT to find peers who have pieces of the content pointed to by HASH
<kantokomi>
I mean I can still access the file by going to localhost, but how can I serve the file to someone on the internet without needing ipfs.io?
<substack>
or if you like, you can run your own web gateway to mirror ipfs to the legacy web
<substack>
kantokomi: the other person needs to have the ipfs client installed
<kantokomi>
Okay, so as long as I can give someone the hash they will be able to get the content. But what if they don't use the terminal
<substack>
you can teach them? I'm not sure
<substack>
you can also use a browser with built-in support for ipfs or a browser extension
<kantokomi>
substack: Is there any documentation for running a gateway?
<kantokomi>
Do any browsers have this built in yet? Extensions for Firefox?
<substack>
the public gateway I think runs the same thing as when you do `ipfs daemon` and can load stuff on localhost:8080
<substack>
I'm not sure exactly what configuration gateway.ipfs.io uses
<kantokomi>
Do I just need to port-forward in order to be a gateway?
<substack>
you can set which ports to use in ~/.ipfs/config
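For reference, the relevant keys in the config file; the values shown are the usual defaults and can be read or changed via the CLI:
    ipfs config Addresses.Gateway                          # typically /ip4/127.0.0.1/tcp/8080
    ipfs config Addresses.API                              # typically /ip4/127.0.0.1/tcp/5001
    ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080    # e.g. expose the gateway on all interfaces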
<kantokomi>
Aha, I see. However, how long would a file survive on the main gateway if no one requests it and I take my own daemon down?
<kantokomi>
So in order to use this add-on, would I need to have a client daemon running?
<substack>
yes
<kantokomi>
And so if I visit an ipfs hash it will be cached on my daemon, right?
<kantokomi>
In my browser that is
<lidel>
kantokomi, in your daemon, yes (and in browser too, content in /ipfs/ is immutable so it is cached for max time)
<kantokomi>
Cool, sorry for all of my stupid questions though :)
<kantokomi>
It also seems like a waste that when someone requests stuff on my node they first go through the gateway, then I send to the gateway and then the gateway can display it
<kantokomi>
Is this supposed to be fixed by everyone running a daemon locally (inside the browser if possible) or something like that?
<lidel>
kantokomi, the browser extension detects requests to the public gateway and if you are running a local one it will be used instead; no request to the public gw will be sent
<kantokomi>
Is there a setting for the daemon to stop sharing after 20 GB uploaded or something similar?
<kantokomi>
lidel: However people not running a daemon will have to do what I described, no?
<lidel>
the public gateway is a "stopgap" solution that enables you to send an IPFS link to friends that do not have a local daemon yet
<kantokomi>
Aha, yes that makes sense
<lidel>
kantokomi, the plan is to have the browser extension run an ipfs node, so there is no need for an external daemon on your system, but we are not there yet
<kantokomi>
Do you think that eventually the node could be integrated into standard browser functionality?
<kantokomi>
Like out-of-the-box working ipfs
<lidel>
that is far in the future, but IMO feasible if there is a good ecosystem
<kantokomi>
I think this project is really cool so I hope this could become reality. Maybe I should try to learn to program in Go also
sprint-helper1 has quit [Remote host closed the connection]
<kantokomi>
Ah no, I will try that now
sprint-helper has joined #ipfs
HoloIRCUser2 has quit [Ping timeout: 240 seconds]
HoloIRCUser has joined #ipfs
<kantokomi>
lidel: That fixed it, thanks :). The extension worked on local content, but not external e.g. the ipfs links in the blog post about wikipedia
<kantokomi>
My daemon gives me these kinds of errors: ERROR core/serve: ipfs cat /ipns/QmVH1VzGBydSfmNG7rmdDjAeBZ71UVeEahVbNpFQtwZK8W/wiki/Anasayfa.html: no link named "wiki" under QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX gateway_handler.go:525
<lidel>
kantokomi, which version of go-ipfs are you running?
<kantokomi>
lidel: Version 0.4.8
ZarkBit has quit [Quit: Going offline, see ya! (www.adiirc.com)]
<lidel>
cool, i got the same error ;)
ZarkBit has joined #ipfs
treaki has joined #ipfs
<kantokomi>
lidel: It seems like this is not related to the extension (maybe). Running the same command in the terminal produces the same error. Maybe the new RC-releases work?
<lidel>
kantokomi, i checked it under v0.4.9-rc2 and it works fine; seems the IPNS lookup changed
<lidel>
it worked right away
<kantokomi>
Cool! I will try to upgrade to that one to test :)
<kantokomi>
Am I the only one who finds the FAQ confusing? Do closed issues mean that they are solved questions or that they are irrelevant? Are open issues the actual FAQ?
ZarkBit has quit [Quit: Going offline, see ya! (www.adiirc.com)]
ZarkBit has joined #ipfs
treaki has quit [Ping timeout: 264 seconds]
<Kubuxu>
kantokomi: open issues are the FAQ
<Kubuxu>
FAQs with good answers will have 'answered' tag
<kantokomi>
Kubuxu: Ah, thanks. It reminded me more of a Q&A than an FAQ. Any plans to export the issues with good answers to a normal webpage?
<Kubuxu>
yeah we should do that
<kantokomi>
lidel: I got it working too by building on the master-branch :)
<Kubuxu>
I think we planned it for this week but got busy with something else.
<Kubuxu>
but if not this week then next
ZarkBit has quit [Quit: Going offline, see ya! (www.adiirc.com)]
<kantokomi>
Awesome :-)!
sirdancealot has quit [Ping timeout: 246 seconds]
ZarkBit has joined #ipfs
treaki has joined #ipfs
<kantokomi>
One last stupid question. IPFS is P2P so anyone can see anyone's IP address, right? Does that mean that anyone can log which sites certain IP addresses visit? Isn't this not very good for privacy? One could not safely put up websites that require logins on IPFS, or is that also possible?
<kythyria[m]>
Requiring login is vastly complicated by immutability.
<Kubuxu>
it is possible to work around with Private Networks (already implemented) or content encryption (not built in yet).
<Kubuxu>
but regarding privacy, it really isn't much worse than the current internet and in some cases it is much better
<Kubuxu>
after you bring some files from outside into your local network (imagine a uni campus), the files spreading within the local network won't be visible
<Kubuxu>
for outsiders
<Kubuxu>
that is something we want to have; it is partially the case right now
<kantokomi>
Kubuxu: On the current internet, only ISPs and the host should be able to see that you visited (and maybe the NSA)?
<kantokomi>
So the more people use it, the more privacy one gets, or am I understanding this wrongly?
<Kubuxu>
you would be able to log whether someone is requesting a given file from the entity that is doing the logging
<kantokomi>
So node C can't tell that node A receives a file from node B?
<Kubuxu>
after bitswap sessions are implemented (we have WIP right now), yes.
<Kubuxu>
also in future it will be possible to run ipfs over TOR or other scrambling networks
<brendyyn>
kantokomi: IPFS simply lacks privacy features, which makes it not a good choice to replace most of the internet with unless there was a solution. I'm not sure, but I think even if you use it behind tor, it will give away IP addresses anyway.
<Kubuxu>
brendyyn: that is why we are not recommending running it as a TOR service right now
<Kubuxu>
OpenBazaar is working on a dedicated TOR transport/build that would include not giving out your IP
<Kubuxu>
from one side it is a bit of reduced privacy, from the other it is a useful feature. If your ISP can see your request, it can fetch the file itself and then send it to you.
<Kubuxu>
it has to transfer those files either way, and what the ISP gains is a powerful cache layer
btmsn has joined #ipfs
<Kubuxu>
brendyyn: so far we have been focusing on making IPFS work and be faster; privacy is something we think about very strongly. We are just limited by time and the amount of code we can write.
gmcabrita has joined #ipfs
<brendyyn>
It's something that concerns me a lot. That caching aspect allows mass surveillance by ISPs.
<Kubuxu>
they already can and do that
<Kubuxu>
and HTTPS doesn't solve that
<lemmi>
ideally we'd have that part of freenet where every node just stores a bunch of blocks. by themselves they are encrypted and don't carry the necessary key. you also generate a certain amount of constant traffic. only when you request a block directly do you do that, by having another piece of information that carries the key.
<brendyyn>
The fact that the problem already exists does not make it any better.
<Kubuxu>
lemmi: IPFS explicitly decided against that
<lemmi>
Kubuxu: oh?
<Kubuxu>
I know, but you can't blame a new thing for not solving all problems in the problem space
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<Kubuxu>
lemmi: because the moment this key is public you in theory have access to that material and you could be hosting some content that is illegal in your legal domain (country, state).
<brendyyn>
I don't care about blame, I'm just interested in solving problems too. It just looks like IPFS is not paving a way to eventually provide strong privacy protection
<Kubuxu>
> so far we have been focusing on making IPFS work and be faster; privacy is something we think about very strongly. We are just limited by time and the amount of code we can write.
<lemmi>
Kubuxu: i think this kind of reasoning doesn't really work at the moment. i think plausible deniability is the term or something. but sure that could change
<lemmi>
as an optional feature i can't see how it could hurt
<Kubuxu>
the moment you have to call on plausible deniability, you are screwed
<lemmi>
with the right one-time pad i could make anything out of a /dev/urandom dump
<lemmi>
but then again. i'm not a lawyer. not in my country, not in another :)
mildred4 has quit [Read error: Connection reset by peer]
dimitarvp has joined #ipfs
mildred4 has joined #ipfs
<kpcyrd>
lemmi: one-time pads are so impractical to use, you won't see a serious project using them
jkilpatr has joined #ipfs
<lemmi>
kpcyrd: the point is rather that you can generate anything out of randomness if you want it badly enough
<lemmi>
with otps it's just straight forward
<kpcyrd>
lemmi: that stops being true the moment you notice you want your crypto to be authenticated
<jamesstanley>
you can layer encryption on top of ipfs relatively easily
<jamesstanley>
I don't think ipfs needs to use it for everything
<jamesstanley>
just put your encrypted content in ipfs and distribute the key to your friends
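One way to do that from the command line today; a sketch using gpg symmetric encryption, with made-up filenames:
    gpg --symmetric --cipher-algo AES256 secret.pdf   # writes secret.pdf.gpg, prompts for a passphrase
    ipfs add secret.pdf.gpg                           # the resulting hash can be shared freely
    # a friend who knows the passphrase fetches and decrypts:
    ipfs cat <that-hash> > secret.pdf.gpg
    gpg --decrypt secret.pdf.gpg > secret.pdf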
<lemmi>
jamesstanley: i'd rather have the encryption as the fundamental building block
<jamesstanley>
it could even be in-browser decryption, e.g. links like /ipfs/Qmfs.fsf/#!content=Qmfsdfsdfsd&key=...
<jamesstanley>
where the Qmfsfsf file is just an in-browser decryptor, and gets the real content + key from the fragment
<brendyyn>
jamesstanley: except that means no forward secrecy
<jamesstanley>
does freenet solve that?
<jamesstanley>
I don't see how it can
<lemmi>
so blocks are encrypted by default; there can be a default key if it's supposed to be public.
<jamesstanley>
it's static content; if you decrypt it once, you've decrypted it forever
<jamesstanley>
lemmi: if there's a "default" key, that can easily just be "no key" and you've got the same as ipfs
<jamesstanley>
all that's required to handle encrypted content is a (very thin!) ui layer
<lemmi>
nope. the difference is when you take a hard drive and can only read garbage or 256k blocks of cleartext
<jamesstanley>
if the key is public you can read the "garbage" anyway
<lemmi>
no
<lemmi>
only when you actually are able to reconstruct the repo.
<lemmi>
and that's way more of a hassle than just reading bytes off a disk and searching for stuff that doesn't look too random
<jamesstanley>
fwiw, I intend to build an in-browser encryption/decryption system, for ipfs, like the one I described above
<lemmi>
jamesstanley: i attempted it last week and stopped because javascript can't stream files to a download window
<lemmi>
so the filesize is limited
<jamesstanley>
if you're interested in text or html, you can render it in the browser; if you're interested in images you can load those using canvas and then users can right click to download
<lemmi>
otherwise it's about 50-100 lines of code
<jamesstanley>
arbitrary data is a bit more complicated, I agree
<lemmi>
i wasn't really interested anymore at that point. the maximum filesize is only half of what you can fit in your ram. that can be very little.
<jamesstanley>
there could be a compatible cli tool to do the same crypto, but store straight to disk
<lemmi>
on the cli i just used encfs --reverse
<lemmi>
or gpg
<lemmi>
just a bunch of shellscripts around these things
<lemmi>
i know that stuff works on top of ipfs. that's fine. but it's not seamless that way.
<jamesstanley>
if freenet is a system layered on top of the internet, I don't see why it couldn't work just as well as a system layered on top of ipfs, and be a lot simpler
<lemmi>
another option i contemplated is to have an encrypted filestore. that would lift some issues
<lemmi>
jamesstanley: ideally you'd provide ipfs with a key, or put the key into the metadata of a directory. everything below that directory would then be encrypted, but still usable like the rest. that would make it less dangerous to set up pin servers, as they can only pin encrypted stuff and serve it; those who have the key can use everything as is and have the benefit of a better network
<lemmi>
without risking the pinserver
<SchrodingersScat>
whyrusleeping: they closed it :(
<lemmi>
but anyways. i don't expect much movement in either direction. other issues are more important.
<SchrodingersScat>
whyrusleeping: oh, wait, i'm stupid, that was something else, disregard.
<jamesstanley>
weird weather in UK atm, it's like monsoon rains
xelra has joined #ipfs
Yann1 has quit [Read error: Connection reset by peer]
Yann2 has joined #ipfs
Yann2 is now known as Guest82817
lindybrits has joined #ipfs
<lindybrits>
Hi all. With ipfs 0.23.0 I'm struggling to use ipfs.object.get(hash, () => {}) with normal base58 encoded string -> I can only get the buffer as input to return my object. Thoughts?
<lindybrits>
had to add {enc: "base58"}
lindybrits has quit [Quit: Page closed]
ashark has joined #ipfs
pratch has quit [Remote host closed the connection]
guardianx has joined #ipfs
pratch has joined #ipfs
pratch has quit [Max SendQ exceeded]
pratch has joined #ipfs
HoloIRCUser has quit [Read error: Connection reset by peer]
maxlath1 has joined #ipfs
HoloIRCUser has joined #ipfs
maxlath has quit [Ping timeout: 260 seconds]
maxlath1 is now known as maxlath
shizy has joined #ipfs
HoloIRCUser2 has joined #ipfs
HoloIRCUser has quit [Read error: Connection reset by peer]
SuprDewd has quit [Ping timeout: 240 seconds]
HoloIRCUser2 has quit [Read error: Connection reset by peer]
HoloIRCUser has joined #ipfs
sirdancealot has joined #ipfs
jleon has joined #ipfs
guardianx has quit [Remote host closed the connection]
bauruine has quit [Read error: Connection reset by peer]
bauruine has joined #ipfs
ylp has quit [Quit: Leaving.]
btmsn has quit [Ping timeout: 240 seconds]
john2 has joined #ipfs
Guest96076 has quit [Ping timeout: 240 seconds]
sirdancealot has joined #ipfs
pratch has quit [Remote host closed the connection]
pratch has joined #ipfs
chungy has quit [Ping timeout: 272 seconds]
pratch has quit [Max SendQ exceeded]
pratch has joined #ipfs
john2 has quit [Ping timeout: 245 seconds]
neurrowcat has quit [Quit: Deebidappidoodah!]
ZarkBit_ has joined #ipfs
dimitarvp` has joined #ipfs
Caterpillar2 has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
ZarkBit has quit [Ping timeout: 240 seconds]
ZarkBit__ has joined #ipfs
ZarkBit__ is now known as ZarkBit
dimitarvp has quit [Ping timeout: 240 seconds]
fzzzr has quit [Quit: WeeChat 1.7.1]
fzzzr has joined #ipfs
ZarkBit_ has quit [Ping timeout: 272 seconds]
SuprDewd has joined #ipfs
sirdancealot has quit [Ping timeout: 240 seconds]
ZarkBit has quit [Ping timeout: 240 seconds]
fzzzr has quit [Ping timeout: 260 seconds]
fzzzr has joined #ipfs
nannal has quit [Remote host closed the connection]
Tallgeese has joined #ipfs
A124 has joined #ipfs
<tangent128>
lemmi: how does Mega manage it? I'm pretty sure I've seen their site decrypt large files to download in pure JS.
<lemmi>
tangent128: i don't know exactly. never had a closer look at mega. but it'd be interesting whether they had that limitation or how they circumvented it
<tangent128>
All I really know is their links have a decryption key in the URL fragment, so the server never knows the key. But that's presumably what you're going for.
<lemmi>
tangent128: yep, that's what i attempted until i got annoyed about the web again :)
<tangent128>
Javascript tears me apart. It's a technical nightmare to program in, but the browser platform is increasingly the most reliable way to do cool cross-platform things people will actually use.
<lemmi>
exactly :)
<lemmi>
i have high hopes for webasm. i really hope they don't blow it
fzzzr_ has joined #ipfs
<tangent128>
Until then, strict-mode Typescript fixes most of the pain points, at least. (module loading aside)
<lemmi>
until then i take other jobs.
fzzzr has quit [Ping timeout: 260 seconds]
<tangent128>
We all have our ways of coping I guess :P
jokoon has joined #ipfs
ygrek has joined #ipfs
Tallgeese_ has joined #ipfs
Tallgeese has quit [Ping timeout: 268 seconds]
palkeo has quit [Ping timeout: 260 seconds]
john2 has joined #ipfs
<tangent128>
jamesstanley, lemmi: only, it seems to be using https://www.w3.org/TR/file-system-api/ to not choke on large files, though it looks like it's Chrome-only
<jamesstanley>
I think there's value in making something work on text-only and then expanding from there; that's what I intend to do
<lemmi>
tangent128: ah the file api i found, but that's read-only
<jamesstanley>
totally agree on "Javascript tears me apart"
<SchrodingersScat>
lemmi: how does tahoe-lafs do it?
galois_dmz has quit [Ping timeout: 272 seconds]
TUSF has joined #ipfs
<lemmi>
SchrodingersScat: no idea, i'm not familiar with tahoe-lafs. but judging from the small picture on their website, it could just be the gateway serving the file via https
HoloIRCUser has quit [Read error: Connection reset by peer]
HoloIRCUser has joined #ipfs
jungly has quit [Remote host closed the connection]
joelburget has joined #ipfs
jkilpatr has quit [Ping timeout: 272 seconds]
joelburget has quit [Quit: joelburget]
jkilpatr has joined #ipfs
Encrypt has joined #ipfs
Encrypt has quit [Client Quit]
jokoon has quit [Quit: Leaving]
espadrine` has quit [Ping timeout: 260 seconds]
Boomerang has quit [Quit: Lost terminal]
ShalokShalom has quit [Ping timeout: 268 seconds]
ashark has joined #ipfs
ShalokShalom has joined #ipfs
fzzzr_ has quit [Ping timeout: 240 seconds]
sirn has joined #ipfs
nocent[m] has joined #ipfs
fzzzr_ has joined #ipfs
aeschylus has joined #ipfs
maxlath has quit [Ping timeout: 272 seconds]
btmsn has joined #ipfs
spossiba has quit [Quit: Lost terminal]
spossiba has joined #ipfs
ZarkBit has joined #ipfs
espadrine has joined #ipfs
<charlienyc[m]>
jamesstanley: that's what signal did
<charlienyc[m]>
Re: text only and expand. Seems to have worked out for them
taaem has quit [Quit: Leaving]
galois_d_ has quit [Remote host closed the connection]
<charlienyc[m]>
The command run there was: dig +trace tr.wikipedia.org
<charlienyc[m]>
> logs
jmill has joined #ipfs
<charlienyc[m]>
">logs"
Foxcool has quit [Ping timeout: 255 seconds]
ashark has joined #ipfs
SalanderLives has joined #ipfs
<achin>
very interesting. it looks like your computer tried to contact one of the root name servers in an attempt to learn the nameserver for the org. domain, but couldn't contact any of them
<charlienyc[m]>
Lgierth: ^
neurrowcat has joined #ipfs
<achin>
i'm pretty sure i know what'll happen, but what if you try this, charlienyc[m] :
<achin>
dig @l.root-servers.net. -t NS org.
jkilpatr has quit [Ping timeout: 272 seconds]
patcon has joined #ipfs
SalanderLives has quit [Quit: Leaving]
<charlienyc[m]>
On it
<charlienyc[m]>
What's that command doing?
<achin>
that command is asking the DNS server at "l.root-servers.net" for the "NS" ("nameserver") record for org
<kvda>
is there a recommended setup for running an ipfs daemon (go-ipfs) node? running on a vps with 1gb ram and it chews up all of it pretty quickly and ramps the CPU usage to 100%
<achin>
charlienyc[m]: interesting, that works
<charlienyc[m]>
Deep packet inspection
<charlienyc[m]>
It's not simple DNS/ip blocking
<achin>
maybe
<charlienyc[m]>
What are you thinking?
<achin>
i'm not sure
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
<achin>
i'm not sure how this query could have been distinguished from the previous one that failed
<achin>
i think the "connection timed out; no servers could be reached" message doesn't contain enough info for me to know exactly what failed
<charlienyc[m]>
Should I return the dig +trace?
<achin>
sure, can't hurt
<charlienyc[m]>
Seems as if the connection actually timed out
<charlienyc[m]>
Terrible connection here
<achin>
ok. genuine network connection problems could of course be a problem