<A124>
If anyone feels like reviving the Wikipedia on IPFS effort, I am into it too. The issue is here: https://github.com/ipfs/archives/issues/20 Going to chime in as well. Doing some research. If we take JavaScript as a given we can create a relatively compact version.
<A124>
Mateon1, let me know if you're interested too.
<vtomole>
A124: I'm here :)
<vtomole>
What should be added?
apiarian has quit [Ping timeout: 240 seconds]
apiarian has joined #ipfs
<A124>
vtomole Well, I meant reviving the initiative, not just the issue. I'm writing a comment atm.
<vtomole>
Ahh i see
<A124>
In a nutshell: static approach: use compression in the IPFS storage or filesystem; dynamic approach: an HTML generator over the XML dump, zlib with a pre-made dictionary.
<A124>
I believe 25GB can hopefully be achieved. And as for images, thumbnails only will do for the time being.
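A minimal sketch of that zlib-plus-preset-dictionary idea in Go, using the standard compress/zlib package. The dictionary and article bytes below are placeholders (a real dictionary would be trained on the dump), and stock zlib tops out at level 9, so BestCompression stands in for the higher levels some zlib variants offer.

    package main

    import (
        "bytes"
        "compress/zlib"
        "fmt"
        "io"
    )

    func main() {
        // Placeholder shared dictionary: byte sequences common to many articles.
        // A real one would come from a dictionary generator run over the dump.
        dict := []byte(`{{Infobox [[Category: == References == <ref>{{cite web`)

        article := []byte("== References ==\n<ref>{{cite web ...}}</ref>\n[[Category:Example]]")

        // Compress a single article against the preset dictionary.
        var buf bytes.Buffer
        w, err := zlib.NewWriterLevelDict(&buf, zlib.BestCompression, dict)
        if err != nil {
            panic(err)
        }
        if _, err := w.Write(article); err != nil {
            panic(err)
        }
        w.Close()
        fmt.Printf("raw %d bytes -> compressed %d bytes\n", len(article), buf.Len())

        // The reader side needs the same dictionary to decompress.
        r, err := zlib.NewReaderDict(&buf, dict)
        if err != nil {
            panic(err)
        }
        out, _ := io.ReadAll(r)
        r.Close()
        fmt.Println(string(out))
    }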
cwahlers_ has quit [Ping timeout: 255 seconds]
pfrazee has quit [Remote host closed the connection]
<vtomole>
How big is Wikipedia? dynamic files and all?
<A124>
Depends on what data you want. XML Dump as of 2016/12 is 13-14GB.
<A124>
Articles or how it is called, only.
<A124>
Then there are categories, templates, I do not know. And history, that takes a lot.
<xelra>
kythyria[m]: I agree, it should go to one of the $XDG directories.
cwahlers has joined #ipfs
<BanJo[m]>
@ wikipedia comment: Lately the history of pages is needed, because of weird changes to pages
cwahlers has quit [Read error: No route to host]
cwahlers has joined #ipfs
galois_d_ has quit [Remote host closed the connection]
cemerick has joined #ipfs
galois_dmz has joined #ipfs
<A124>
BanJo[m] Well, how do we solve history the static way, then? Just show it if requested? I bet there is some diff library or vcdiff that could do the job here with minimal space requirements. And optionally one can bundle the last X changes into one file.
<A124>
I never personally used history, unless I wanted to see the latest edits.
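As a rough illustration of that diff idea (keep the newest revision in full and archive compact reverse patches for the history view), a sketch in Go using the go-diff port of diff-match-patch; the library choice and the two revision strings are assumptions for illustration only.

    package main

    import (
        "fmt"

        "github.com/sergi/go-diff/diffmatchpatch"
    )

    func main() {
        older := "IPFS is a protocol.\nIt uses content addressing.\n"
        newer := "IPFS is a peer-to-peer protocol.\nIt uses content addressing and DHTs.\n"

        dmp := diffmatchpatch.New()

        // Store only the newest revision in full, plus a patch that turns it
        // back into the previous revision (a reverse diff for history).
        patches := dmp.PatchMake(newer, older)
        patchText := dmp.PatchToText(patches) // this compact text is what gets archived

        fmt.Printf("full revision: %d bytes, reverse patch: %d bytes\n",
            len(newer), len(patchText))

        // A history viewer (e.g. client-side JS) applies the patch on demand.
        parsed, err := dmp.PatchFromText(patchText)
        if err != nil {
            panic(err)
        }
        restored, _ := dmp.PatchApply(parsed, newer)
        fmt.Println(restored == older) // true
    }

Chained patches (the "bundle the last X changes into one file" idea) would just apply several of these in sequence.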
<BanJo[m]>
How much does storage increase on average for history though?
<BanJo[m]>
Probably only moderator-approved changes would be shown?
<BanJo[m]>
I read all of that. Sorry for asking a lot, btw. If I can do anything to help out just ask me; I have some dev experience, but will have to read up on the current state of the GitHub repos, the sprints and other stuff. Haven't read the channel for a while because of a weird bug in my Riot app not showing me notifs
cemerick has quit [Ping timeout: 255 seconds]
cemerick has joined #ipfs
<BanJo[m]>
A124: Which language would you want diff written in?
<A124>
BanJo[m] Anything that seems a good choice. I don't really care. The JS could take current article + diff.
<A124>
Which makes updating an article burdensome, though.
ivo_ has quit [Ping timeout: 276 seconds]
<A124>
Or having to keep a chained diff and let JS do the work, which would happen rarely.
<BanJo[m]>
It is available in C# and Java it seems
<A124>
But overall it would be really stupid to kill this because one cannot have history.
anonymuse has quit [Remote host closed the connection]
<BanJo[m]>
The whole idea is to have no need of history, because pages are unique based on their hashes? Or am I mistaken?
<A124>
Then you have to store each page on some disk/server.
<A124>
Got that problem?
<BanJo[m]>
Yeah
<BanJo[m]>
Ofc, storing only changes based on unique bytes is hard
<BanJo[m]>
But more manageable
<BanJo[m]>
But too small
<BanJo[m]>
Hmm, what if you take a forking structure? Like github
<BanJo[m]>
Still too much
<BanJo[m]>
Sorry, I'm rambling, gotta sleep
<A124>
Yeah, the forking does make sense, but git has packs. We will see, thanks for input. Goodnight.
<BanJo[m]>
Sleep well all, or good luck finishing up, or good morning
arkimedes has quit [Ping timeout: 258 seconds]
ivo_ has joined #ipfs
dbri has joined #ipfs
ELLIOTTCABLE is now known as ec
anewuser has quit [Quit: anewuser]
krs93 has joined #ipfs
anonymuse has joined #ipfs
wrouesnel has quit [Remote host closed the connection]
wkennington has joined #ipfs
_whitelogger has joined #ipfs
Boomerang has joined #ipfs
tmg has quit [Ping timeout: 256 seconds]
wrouesnel has joined #ipfs
<whyrusleeping>
kevina: yeah, whatsup?
wrouesnel has quit [Remote host closed the connection]
Boomerang has quit [Quit: leaving]
wrouesnel has joined #ipfs
Nycatelos has quit [Ping timeout: 240 seconds]
<kevina>
whyrusleeping: just wondering about your comment on https://github.com/ipfs/go-ds-flatfs/pull/9: Unsure... how about we keep things as is, but throw a warning saying we encountered an unexpected file?
<kevina>
?
<kevina>
I will assume yes.
<kevina>
Does that mean not to move it.
<kevina>
whyrusleeping: my other concern was what you meant by "keep things as is".
<kevina>
never mind, I see that go-log is used
<kevina>
so how should I throw a warning?
<whyrusleeping>
By that i mean, move the file over
<whyrusleeping>
"keep the things as is" referring to how you have the code now
<kevina>
oh, good thing I asked then :)
Nycatelos has joined #ipfs
<kevina>
so I should use go-log to log the warning, or output something to the "out" parameter?
<whyrusleeping>
I'm okay with go-log being used
<kevina>
okay
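For reference, emitting a warning through go-log amounts to a package-level logger plus a Warning call. A minimal sketch of that shape (the function and path names are made up, and newer go-log versions spell the call Warnf rather than Warningf):

    package flatfs

    import (
        logging "github.com/ipfs/go-log"
    )

    // Package-level logger, the usual convention in go-ipfs packages.
    var log = logging.Logger("flatfs")

    // warnUnexpected is a hypothetical helper: keep the code path as it is,
    // but record that an unexpected file was seen instead of failing.
    func warnUnexpected(path string) {
        log.Warningf("flatfs: encountered unexpected file %q, skipping", path)
    }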
ygrek has quit [Ping timeout: 252 seconds]
<kevina>
whyrusleeping: pushed
<whyrusleeping>
kevina: cool, thanks
tmg has joined #ipfs
anonymuse has quit [Remote host closed the connection]
kulelu88 has quit [Quit: Leaving]
mguentner has quit [Quit: WeeChat 1.6]
Oatmeal has quit [Ping timeout: 240 seconds]
<A124>
So... do you think one should aim for max compression, or good and fast? Good and fast can be zlib level 4 plus a dictionary. Max can be zlib level 12 (yes, some zlib variants have one), plus a dictionary.
<A124>
Currently extracting the pages from the XML to run dictionary generator and test.
stevenaleach has quit [Quit: Leaving]
<A124>
But in reality one could probably just use the 10,000 most important articles as per Wikipedia. Yes, a little off-topic, sorry.
mguentner has joined #ipfs
reit has joined #ipfs
wallacoloo____ has joined #ipfs
seharder has quit [Ping timeout: 255 seconds]
Boomerang has joined #ipfs
voldyman has joined #ipfs
bret has joined #ipfs
dryajov has quit [Ping timeout: 260 seconds]
esph has quit [Remote host closed the connection]
mguentner2 has joined #ipfs
mguentner has quit [Ping timeout: 240 seconds]
tmg has quit [Ping timeout: 240 seconds]
cemerick has quit [Ping timeout: 248 seconds]
cemerick has joined #ipfs
wrouesnel has quit [Remote host closed the connection]
wrouesnel has joined #ipfs
ygrek has joined #ipfs
wrouesnel has quit [Remote host closed the connection]
esph has joined #ipfs
arkimedes has quit [Ping timeout: 260 seconds]
wrouesnel has joined #ipfs
Corg has quit [Ping timeout: 260 seconds]
Oatmeal has joined #ipfs
vtomole has quit [Ping timeout: 260 seconds]
muvlon has joined #ipfs
Boomerang has quit [Quit: leaving]
Aranjedeath has joined #ipfs
ygrek has quit [Ping timeout: 260 seconds]
structuralist has joined #ipfs
wrouesnel has quit [Remote host closed the connection]
tmg has joined #ipfs
wrouesnel has joined #ipfs
gts has joined #ipfs
cemerick has quit [Ping timeout: 248 seconds]
ivo_ has quit [Remote host closed the connection]
Oatmeal has quit [Quit: Suzie says, "TTFNs!"]
chungy has quit [Ping timeout: 258 seconds]
Oatmeal has joined #ipfs
pfrazee has quit [Remote host closed the connection]
tmg has quit [Ping timeout: 240 seconds]
<kythyria[m]>
Aren't the dumps source code? So you'd need something to make head or tail of transclusions to actually get a mirror.
wrouesnel has quit [Remote host closed the connection]
wrouesnel has joined #ipfs
chungy has joined #ipfs
structur_ has joined #ipfs
wrouesnel has quit [Remote host closed the connection]
structuralist has quit [Ping timeout: 255 seconds]
wrouesnel has joined #ipfs
wrouesnel has quit [Remote host closed the connection]
wrouesnel has joined #ipfs
<A124>
kythyria[m] Well, source code... It's the markup. There are also full HTML dumps. But those are large. The whole point is to have a mirror of the articles, not the site. And the articles themselves are in wiki markup.
wrouesnel has quit [Remote host closed the connection]
wrouesnel has joined #ipfs
wrouesnel has quit [Remote host closed the connection]
<kythyria[m]>
Yes. But you need the Template and Module namespaces to capture everything about the articles that isn't media.
wrouesnel has joined #ipfs
<A124>
Those are exported also. But... first it's about the articles
<A124>
He who wanted to build a mountain alone, never had a house.
<kythyria[m]>
Making it browseable rather than something you feed into a local copy of mediawiki will be... interesting... too.
<A124>
Like... who really uses navigation, and how often, apart from what the articles and search offer?
<A124>
Did you even follow? What does browseable mean?
<A124>
JS can render the HTML, which is what I said.
<kythyria[m]>
There's a JS version of MW's renderer?
<A124>
Few of them.
<A124>
Also there are transpilers.
<A124>
Or compilers, or how are all the different variants named.
<A124>
Actually, not mediawiki, just renderers
<A124>
I did not ever say it has to be identical.
muvlon has quit [Ping timeout: 245 seconds]
dignifiedquire has quit [Quit: Connection closed for inactivity]
<kythyria[m]>
Without templates and possibly modules you'll lose things like infoboxes too.
<A124>
Whatever is needed.
<A124>
The thing is the principle and methods, not what else.
<Mateon1>
Uh... Why does this work?: ipfs swarm connect /ip4/1.1.1.1/tcp/1/ipfs/QmfY24aJDGyPyUJVyzL1QHPoegmFKuuScoCKrBk9asoTFG
<Mateon1>
It seems that everything except the peerid is ignored
pfrazee has joined #ipfs
appa has joined #ipfs
pfrazee has quit [Ping timeout: 258 seconds]
ulrichard has joined #ipfs
dignifiedquire has joined #ipfs
ygrek has joined #ipfs
<deltab>
maybe this? "Connect ensures there is a connection between this host and the peer with given peer.ID. Connect will absorb the addresses in pi into its internal peerstore. If there is not an active connection, Connect will issue a h.Network.Dial, and block until a connection is open, or an error is returned."
<deltab>
so I'm guessing it already had a connection, or perhaps already knew how to connect
<A124>
Whoever is interested: extracted a 1/1000 sparse sample of the enwiki xml dump (105/13MB):
<Mateon1>
deltab: Didn't already have a connection. I made sure to choose a peer from ipfs swarm addrs... oh... (that was not in my ipfs swarm peers already)
<jbenet>
davidar: i think you've seen all the issues posed on archives this week
* davidar
is in the process of trying to get ~100M openstreetmap tiles into ipfs
<jbenet>
davidar: let us know if you want to be involved in anything-- exciting stuff happening
<davidar>
it did seem like the number of issues had somewhat increased recently :)
ygrek has quit [Ping timeout: 245 seconds]
ianopolous has quit [Remote host closed the connection]
ecloud is now known as ecloud_wfh
ylp has joined #ipfs
ianopolous has joined #ipfs
voker57 has quit [Ping timeout: 240 seconds]
rendar has joined #ipfs
voker57 has joined #ipfs
structur_ has quit [Remote host closed the connection]
structuralist has joined #ipfs
structuralist has quit [Ping timeout: 255 seconds]
<Akaibu>
whyrusleeping jbenet just wondering, could there be any benefit to making a custom virtual machine for ipfs? I'm thinking this because it could be used for stuff like https://github.com/ipfs/archives/issues/87, as Jason Scott's ArchiveTeam uses their Warrior VM to allow easy contribution of bandwidth
<Akaibu>
idk, just thought I would point this out and see if it could be useful
<muvlon>
what is a "virtual machine of ipfs"?
<muvlon>
you mean like an OS image you can just throw into a VM and it'll run ipfs?
<Akaibu>
more or less i think
<Akaibu>
also could be used as a CDN i think
<Akaibu>
again i'm just tossing out an idea and letting you guys run with it
<muvlon>
yeah I think using the docker image is just fine
<Akaibu>
muvlon: yea but docker isn't truly user friendly
<Akaibu>
an idiot could use VB tbh
<muvlon>
isn't that the big selling point of docker?
<muvlon>
that it's "lxc for dummies"?
<Akaibu>
i was never able to figure it out lol
<muvlon>
I've never been able to do anything advanced with it
<Akaibu>
although that was a couple of years ago
<muvlon>
but just starting an image is literally "docker run foo"
<Akaibu>
again the point is that the end user doesn't have to do much
<Akaibu>
just download and run
<Akaibu>
more or less
<muvlon>
that's just what docker does
<muvlon>
a VB appliance makes it easier for the windows/OSX crowd though
<daviddias>
Moooorning planet!
<Akaibu>
daviddias: yo
<Akaibu>
lol
s_kunk has quit [Ping timeout: 245 seconds]
G-Ray_ has joined #ipfs
<daviddias>
:D
<jbenet>
Akaibu: i had a vm a while back, it was even stored in ipfs. boot-to-ipfs basically
<jbenet>
daviddias: <3
<Akaibu>
... ??
<Akaibu>
i'm kinda confused
<Akaibu>
how does that work
<jbenet>
having an ipfs node in the host OS, which then boots linux with ipfs in it, and runs it.
<muvlon>
how about an ipfs bootloader
<muvlon>
basically it would contain an ipfs implementation, use that to pull an initrd which also contains an ipfs implementation
<muvlon>
and then the rootfs is in ipfs as well, of course :)
<jbenet>
muvlon yeah would be great
<jbenet>
ship it!
Jamal[m] has joined #ipfs
faenil_ is now known as faenil
faenil is now known as faenil_
espadrine_ has joined #ipfs
dem is now known as demize
maxlath has joined #ipfs
mildred2 has quit [Ping timeout: 255 seconds]
tmg has joined #ipfs
s_kunk has joined #ipfs
s_kunk has joined #ipfs
s_kunk has quit [Changing host]
<daviddias>
🚢
Jamal[m] has left #ipfs ["User left"]
mildred2 has joined #ipfs
Mizzu has joined #ipfs
mildred1 has quit [Ping timeout: 240 seconds]
bastianilso_ has joined #ipfs
tabrath has joined #ipfs
Caterpillar has joined #ipfs
maxlath has quit [Ping timeout: 240 seconds]
G-Ray_ has quit [Quit: G-Ray_]
G-Ray_ has joined #ipfs
elasticdog has quit [Ping timeout: 246 seconds]
chungy has quit [Ping timeout: 252 seconds]
pfrazee has joined #ipfs
chungy has joined #ipfs
elasticdog has joined #ipfs
pfrazee has quit [Ping timeout: 248 seconds]
tabrath has quit [Ping timeout: 258 seconds]
mildred1 has joined #ipfs
jkilpatr has quit [Ping timeout: 240 seconds]
ebarch has joined #ipfs
ebarch has quit [Quit: Ping timeout (120 seconds)]
jkilpatr has joined #ipfs
mguentner2 is now known as mguentner
structuralist has joined #ipfs
<Mateon1>
Okay... I guess this is how long it takes to get all links in a massive repo on Windows. `$ time ipfs refs local | xargs -n1 ipfs refs | wc`: 12300559 25225894 679441140; real 134m38.132s; user 11m45.371s; sys 100m57.384s
<Mateon1>
For reference: ipfs refs local | wc -l = 312695
<Mateon1>
Uh, wrong command. I actually used `ipfs object links` instead of the second `ipfs refs`
<Mateon1>
Ah, so that's why I got a weird result for the wc. object links gives sizes and names as well
<Mateon1>
Let me re-do it all over again
strixx has joined #ipfs
kenshyx has joined #ipfs
LowEel has quit [Ping timeout: 276 seconds]
strixx has quit [Client Quit]
LowEel has joined #ipfs
ebarch has joined #ipfs
ebarch has quit [Remote host closed the connection]
ebarch has joined #ipfs
<daviddias>
actually
<daviddias>
what if we add to js-ipfs
<daviddias>
in the package.json browser field
<daviddias>
nvm
mildred1 has quit [Quit: WeeChat 1.6]
mildred1 has joined #ipfs
henriquev has quit [Quit: Connection closed for inactivity]
cemerick has joined #ipfs
shizy has joined #ipfs
kenshyx has quit [Quit: Leaving]
arpu has joined #ipfs
cemerick has quit [Ping timeout: 255 seconds]
pcre has joined #ipfs
cemerick has joined #ipfs
shizy has quit [Client Quit]
shizy has joined #ipfs
tmg has quit [Ping timeout: 258 seconds]
nunofmn has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
anonymuse has joined #ipfs
Boomerang has joined #ipfs
nunofmn has joined #ipfs
Boomerang has quit [Ping timeout: 248 seconds]
Boomerang has joined #ipfs
phorse has joined #ipfs
ashark has joined #ipfs
infinity0 has joined #ipfs
Boomerang has quit [Ping timeout: 255 seconds]
infinity0 has quit [Remote host closed the connection]
<A124>
So, I did some more research on the compression of the wiki, and to paste in my issue comment:
<A124>
The only way to have good compression via widespread compression methods seems to be clustering. Compressing per record results in 4-5x the size, which points back to storage-level compression.
<A124>
The only other way could be a purpose crafted dictionary + huffman coder.
anonymuse has quit [Remote host closed the connection]
<A124>
So depending on the end user and storage, one might want to use a database, or a flat-file filesystem over a regular one or none. Will look into the datastore, and for the wiki either use ZIM or gzip, which would result in about 65GB.
<pinbot>
now pinning /ipfs/QmdGuUdZVtZbadEpAhPgFbSYt32c32LhxF6FyZxATereXd
<pinbot>
[host 2] failed to grab refs for /ipfs/QmdGuUdZVtZbadEpAhPgFbSYt32c32LhxF6FyZxATereXd: Post http://[fcfe:eab4:e49c:940f:8b29:35a4:8ea8:b01a]:5001/api/v0/refs?arg=/ipfs/QmdGuUdZVtZbadEpAhPgFbSYt32c32LhxF6FyZxATereXd&encoding=json&stream-channels=true&r=true&: dial tcp [fcfe:eab4:e49c:940f:8b29:35a4:8ea8:b01a]:5001: getsockopt: connection timed out
<richardlitt>
lgierth: I want to run another node thing. Should I put it on the same ssh place as ipfs-helper?
ethhet has joined #ipfs
<ethhet>
Hey, I have used the js-ipfs-api quite a bit and successfully. I plan to set up 3 nodes (private network), upload documents on each of them, and then share them over ipfs by adding these as bootstrap nodes. What exactly is the role of IPNS in all this?
<ethhet>
Basically, what CANNOT be done by ipfs alone and needs ipns as well? I read the docs but this is a little confusing for me. TiA
<A124>
ipns is made for somewhat dynamic content addressing, so that you can get a resource by path instead of by content hash.
structuralist has quit [Remote host closed the connection]
<ethhet>
doesn't the path require some identifier (like a hash?)
<A124>
You use the signed ipns hash as your namespace to publish the stuff. I hope I use "ok" terms.
<ethhet>
I know it is used for websites as the website will keep getting updated.
anonymuse has quit [Remote host closed the connection]
<ethhet>
is there any advantage of using just ipfs over ipfs+ipns?
<A124>
Yeah, the top level is distributed in the DHT and signed by your key. It translates an ipns path -> an ipfs multihash
<lgierth>
richardlitt: best to put it on cloud.ipfs.team
<lgierth>
i'll be back in a few
<A124>
Probably you should read up more. If you don't need to update your content while keeping its address the same, you don't need ipns.
<A124>
IPNS is like DNS for IPFS.
<richardlitt>
lgierth: I don't think I have access to that?
<ethhet>
I've got 3 computers (nodes) that will have 20-30 documents (mp4 files) added every day, and these should be distributed by ipfs, of course. Not sure if I should use IPNS for this use case
<ethhet>
hmm this seems more thorough. Will check this out in depth. ty
<A124>
If you want a central "directory" that lists them all and updates when you add something, then use ipns also.
<A124>
If you just care about having videos on ipfs, and have another channel to distribute the hashes to the vids, ipfs alone is enough.
<A124>
In other words, it will not hurt.
<A124>
What can hurt is what you use in the end to refer to those files. If you refer to individual videos, use ipfs only; if you want to refer to the whole, up-to-date library, use ipns
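To make that split concrete, here is a rough sketch with the go-ipfs-api client against a local daemon on the default API port; the reader content stands in for a real mp4 and error handling is trimmed.

    package main

    import (
        "fmt"
        "strings"

        shell "github.com/ipfs/go-ipfs-api"
    )

    func main() {
        sh := shell.NewShell("localhost:5001")

        // Immutable: adding a document returns a content hash. Anyone fetching
        // /ipfs/<hash> gets exactly these bytes, guaranteed by the hash itself.
        hash, err := sh.Add(strings.NewReader("pretend this is an mp4"))
        if err != nil {
            panic(err)
        }
        fmt.Println("immutable address:", "/ipfs/"+hash)

        // Mutable: publish that hash under this node's IPNS name. Re-running
        // Publish later with a new hash moves the "library" pointer, while every
        // old /ipfs/<hash> keeps meaning exactly the old content.
        if err := sh.Publish("", "/ipfs/"+hash); err != nil {
            panic(err)
        }

        // Resolving the IPNS name returns whatever /ipfs/ path was last published.
        resolved, err := sh.Resolve("")
        if err != nil {
            panic(err)
        }
        fmt.Println("current library pointer:", resolved)
    }

Only the holder of the node's private key can republish the IPNS pointer; the /ipfs/ addresses themselves never change meaning.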
Boomerang has joined #ipfs
<ethhet>
Think I understood your point. Just to confirm, why would using ipns for videos be a problem?
<ethhet>
A124: ^^
<A124>
ethhet It would not be a problem, but people want one video to always be the same video.
<A124>
ipfs does guarantee that to the degree of math. ipns leaves that decision up to you.
<ethhet>
ahh well, I do not plan to ever even remotely modify the documents
<A124>
If the video is not supposed to be updated, then it is counter-intuitive to use ipns.
<ethhet>
wait wait, I want these documents to be immutable, ipns will make that useless right?
<ethhet>
as it is to reference the same object even after it is modified (thereby changing the mathematically derived hash)?
<A124>
IPNS gives you a layer to translate your multihash + path to ipfs multihashes.
<victorbjelkholm>
from mguentner it seems. Thanks for writing that up :)
<ethhet>
which is why it's very good for websites, as you want people to refer to the same thing after it's modified as well
<A124>
If you change something, you get a new hash, and you can update the hash in your "naming/translation" layer - the ipns.
<ethhet>
ya but I want to know if a document has ever been tampered with. For that ipns is counter-intuitive right?
<A124>
Yeah. Unless you need granular updates, the simplest is to just have one hash and publish that over IPNS, having only top level link.
<ethhet>
that last line just confused me a bit :D
<A124>
ipns gives a guarantee that if it was changed, it was changed by someone who had your private key.
<A124>
Forget the line that confused you.
<A124>
ipns is signed by your key. And your node needs to be online at least once each 24 hours, to update and keep the hash alive in the DHT.
<ethhet>
ya but I don't trust that node, so I would rather ensure every document uploaded to that node is never changed
<A124>
You can have ipns on your home pc, or anywhere. Does not have to be the same node at all. And you can address any ipfs content this way.
<ethhet>
basically node_2 should be assured that node_1 cannot in any way modify a document without node_2 being aware of it. (in case of ipfs, the link would break)
<ethhet>
but use of ipns will allow node_2 to make the change without node_1 knowing of it
<A124>
Yes. ipfs does guarantee that to the degree of sha256 security
<A124>
No, it will not. ipns can be on a completely different node.
<ethhet>
node_1 and node_2 will each run their own ipns
<A124>
But the security relies now also on the key security.
<A124>
IPNS and IPFS are not coupled!
<ethhet>
well node_one and node_two are two autonomous organizations, which need a way to trust each other.
<A124>
As in DNS does not have to run on webserver.
<ethhet>
ya I know
<A124>
There is no way you can trust someone if you cannot ensure integrity.
<A124>
"I want to trust someone I don't trust."
<ethhet>
which stand alone ipfs provides
<ethhet>
in terms of document integrity from the point it was uploaded
<A124>
No, ipfs provides trust in the content, not in the node.
<A124>
Yes.
<ethhet>
yes but if you trust the content, the node does not have the power to change that content irrespective
<ethhet>
once its published
<A124>
Is this all you need? Or do we have to keep talking about stuff you supposedly already know?
<ethhet>
I had mentioned I know how to use the APIs, but don't have an in-depth understanding of them. Thanks a lot
<A124>
Glad to help.
ethhet has quit [Quit: ChatZilla 0.9.93 [Firefox 50.1.0/20161209095719]]
A124 has quit [Quit: '']
A124 has joined #ipfs
<lgierth>
A124: can we be a bit nicer? no need to point fingers at people just because they don't know something
arpu has quit [Ping timeout: 256 seconds]
<mguentner>
victorbjelkholm: yay
Boomerang has quit [Remote host closed the connection]
Boomerang has joined #ipfs
arpu has joined #ipfs
ulrichard has quit [Remote host closed the connection]
Boomerang has quit [Ping timeout: 255 seconds]
pfrazee has joined #ipfs
ylp has quit [Quit: Leaving.]
m_anish__ has joined #ipfs
<m_anish__>
Hi. Noob question. Is it possible to run ipfs on routers running OpenWrt? Are there any builds for any router models? Is this a sane idea?
<m_anish__>
I am involved with building solar powered wifi mesh networks and offline servers, and would love to decentralize the setup more through tools such as ipfs.
<victorbjelkholm>
basically: "Newer OpenWrt's come with musl instead of glibc, so you'll have to cross-compile go-ipfs with musl instead of glibc. We'll have prebuilt musl-based binaries on dist.ipfs.io in the future."
<dansup>
Igel, yo
<m_anish__>
victorbjelkholm, thanks for the reply. thats from 6 months ago. did the future arrive? :)
<m_anish__>
fwiw, ipfs seems like a wonderful idea. we've been pushing to decentralize our mesh setups (which are decentralized at the network/layer2-3 level). but decentralizing at this level would be really great!
<victorbjelkholm>
lgierth: did the future arrive and we have musl-based binaries on dist.ipfs.io?
<victorbjelkholm>
m_anish__: I'm unsure, but go-ipfs is really easy to build as long as you have go installed. Not sure how easy it is to build based on musl though...
<m_anish__>
but the constraints are that it must run on tiny routers which consume next to no power and can handle extreme weather
<m_anish__>
victorbjelkholm, sure. i am going to give this a try, but was just wanting to know if someone had tried this before.. and had any builds for routers to share.
<victorbjelkholm>
m_anish__: sounds like a really interesting project! I'll let someone more knowledgeable about go-ipfs to answer though
<m_anish__>
victorbjelkholm, ok. i (or my bot) will continue to lurk on this channel then... its getting a bit late here,so will come back later. thanks again!
<r0kk3rz>
m_anish__: what would you use ipfs for?
<m_anish__>
r0kk3rz, the idea being that the storage would be decentralized on each router.
<m_anish__>
right now we have a central server in the village mesh network
<m_anish__>
would like to distribute that. On top of that, additionally build apps for people to use to create/share content
<m_anish__>
ipfs would give storage+name resolution, from what I understand.
<r0kk3rz>
m_anish__: yes i guess you could run a gateway on each router
<m_anish__>
this would add reliability, redundancy
<m_anish__>
coz right now, if the server fails, everything fails. and the server is a power hungry (given the constraints) mini PC, spinning hard disks, limited operating temperature range etc.
<jbenet>
mguentner thanks for writing the post! 👍 before i forget, would love to chat with you about that effort, (the gist is, i'd like to support it how we can, and lots of things go-ipfs will ship in 0.4.5 or 0.4.6 will be useful/relevant. one thing i'd like to figure out is special importing of the .nar archives to model the internal graph (see how `ipfs tar`
<jbenet>
works).
anonymuse has quit [Remote host closed the connection]
<lgierth>
m_anish__: you can also build ipfs with musl yourself
<m_anish__>
lgierth, ok will try
<lgierth>
m_anish__: anything in particular you have in mind with openwrt? i have a bit of experience with it
<m_anish__>
thx
<m_anish__>
lgierth, well we have a custom build on openwrt called SECN. based off of their 15.05 release I think. And would love to get ipfs to run on that. we already used the routers as mesh nodes running batman-adv in solar powered mesh networks in remote villages
<m_anish__>
ipfs would allow media layer decentralization and batman does the layer 2 decentralization. :)
<m_anish__>
the routers would also have storage with them. so a decentralized solar powered media network :)
<m_anish__>
and we taught local villagers in another high altitude valley later... and they setup the network on their own
anonymuse has joined #ipfs
<lgierth>
awesome use case -- do you wanna run go-ipfs on the GLi-AR150?
<m_anish__>
lgierth, yep!
<lgierth>
because that's gonna be very hairy. it's mips iirc and the golang tools don't support it. maybe GCC 6 or 7 supports a golang version recent enough for go-ipfs
<m_anish__>
thats what we are hoping for :) if possible
<lgierth>
apart from it being mips, it's also not got a lot of computing power
<lgierth>
and go-ipfs is not optimized at all for wifi networks yet -- it just keeps on talking and eating airtime
s_kunk has quit [Ping timeout: 248 seconds]
<m_anish__>
hm
<lgierth>
you should totally check out if it works out for you :=
<lgierth>
:)
<lgierth>
just saying it's an area we haven't paid much attention to yet
<m_anish__>
lgierth, do you think if its worth a try?
<lgierth>
on this little mips device it's gonna be horribly slow
<lgierth>
a pentium 2 will do better
<lgierth>
better get a raspi
<m_anish__>
hm
<lgierth>
or a similar arm board
<m_anish__>
the thing is
<m_anish__>
ar150 is amazing wifi performance
<lgierth>
yeah it has an ath9k chip eh?
faenil_ is now known as faenil
<m_anish__>
i mean, it is probably the best router that i have come across in this band of performance ... it is the only router that has not been a pita, and actually been fun
<m_anish__>
there are other faster chipsets
<m_anish__>
like ar9531 (i think) 650mhz, and we can get more ram on it .. upto 128/256mb
<m_anish__>
but it wont compare with a rpi
<lgierth>
yeah won't even get close to it
<mguentner>
jbenet: ack
<m_anish__>
so yeah, maybe for initial setup ... a rpi for ipfs and the router for meshing might be an idea
<lgierth>
the thing is, mipses are really slow for cryptographic operations
<lgierth>
and ipfs does lots of these
<m_anish__>
ok
<lgierth>
hashing, signing, encrypting, decrypting
<m_anish__>
hm
<m_anish__>
ok
<lgierth>
i've ported cjdns to openwrt so i've seen how slow this kind of work gets :D
<m_anish__>
:)
<lgierth>
ok i gotta run, let us know how it goes
<m_anish__>
maybe openwrt on raspi doing mesh and ipfs could also be an idea
<m_anish__>
okay
<m_anish__>
lgierth, thanks for the insights!
espadrine_ has quit [Ping timeout: 256 seconds]
<Igel>
lo dan0_0
<Igel>
err. dansup
<Igel>
lo lgierth
* Igel
reading the 300 tb chal data.gov
anonymuse has quit [Remote host closed the connection]
structuralist has quit [Ping timeout: 255 seconds]
<kumavis>
richardlitt: varints are integers that are serialized to binary in a way that allows them to have a dynamic size
<richardlitt>
Why couldn't you have dynamic size in binary normally?
<kumavis>
as you read the number it says whether or not you have reached the end of the number
<kumavis>
well when we have a long string of data containing many things, its hard to know where one ends and the next begins
<kumavis>
theres no whitespace or delimiters, just 1s and 0s
<kumavis>
one solution might be to prefix the number with the length (how many bytes)
<kumavis>
<size in bytes of number><bytes representing the number>
<kumavis>
but then you have the same problem for expressing the size in bytes
<kumavis>
so instead varint indicates when its done
<kumavis>
<number part (not done)><number part (not done)><number part (done!)>
anonymuse has joined #ipfs
<kumavis>
this allows the size of the number to be unbounded
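That continuation-bit scheme is what Go's encoding/binary unsigned varint implements (the same idea the multiformats varints build on); a small self-contained illustration:

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    func main() {
        buf := make([]byte, binary.MaxVarintLen64)

        // Each byte carries 7 bits of payload; the high bit means "not done yet",
        // so the decoder knows where the number ends without a length prefix.
        for _, v := range []uint64{1, 127, 128, 300, 9007199254740991} {
            n := binary.PutUvarint(buf, v)
            decoded, _ := binary.Uvarint(buf[:n])
            fmt.Printf("%d -> % x (%d bytes), decodes back to %d\n", v, buf[:n], n, decoded)
        }
    }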
<richardlitt>
Got it.
<richardlitt>
thank you!
<kumavis>
the javascript number maxes out at 9007199254740991 bc it does not employ a strategy like this
<kumavis>
this is a problem when working with ethereum "wei" units which are often >1e18
<dignifiedquire>
kumavis: there are many more problems with numbers in JavaScript :D
<deltab>
well, you're not going to need hundreds of bytes to encode the first number, so you only need a byte at most to encode the length
ygrek has joined #ipfs
anonymuse has quit [Remote host closed the connection]
anonymuse has joined #ipfs
<kythyria[m]>
deltab: That actually uses more space until you get above 64 bits of actual number.
<kythyria[m]>
... not 64
<kythyria[m]>
56
<kythyria[m]>
Most numbers are below 2^56 (and my math might still be wrong).
<deltab>
right, with varint encoding you only take up an extra bit for small numbers
<deltab>
not even extra, as it's borrowed from the byte used for the number
<Mateon1>
kumavis: The Javascript Number type does not max out at that point. that is just the MAX_SAFE_INTEGER (or whatever that constant is called)
<kumavis>
Mateon1: yes thank you, you are correct
<Mateon1>
Number uses floating-point under the hood, which loses precision once you pass that point
<deltab>
so no space overhead for 0–127, nor 256–16383
cemerick has quit [Ping timeout: 255 seconds]
cemerick has joined #ipfs
infinity0 has joined #ipfs
<kythyria[m]>
For 562949953421312..72057594037927935 it doesn't matter which you use, after that length-prefixed wins.
<kythyria[m]>
(assuming one byte of length prefix)
jkilpatr has quit [Ping timeout: 252 seconds]
stevenaleach has joined #ipfs
<stevenaleach>
fy /clear
jkilpatr has joined #ipfs
fossilify has quit [K-Lined]
Encrypt has joined #ipfs
cemerick has quit [Ping timeout: 255 seconds]
cemerick has joined #ipfs
nunofmn has quit [Ping timeout: 276 seconds]
rendar has quit [Ping timeout: 252 seconds]
<stevenaleach>
Performance is a bit different depending on my connection, I notice. Using the hackerspace's wifi, pinning content locally that's on my VPS is absurdly fast - using my T-mobile phone as a hotspot, I just *FINALLY* managed to get a 25mb file pinned on the server to transfer to my laptop. This is odd since the phone has absurdly high bandwidth normally, far better than the hackerspace in fact. Something makes T-Mobile less friendly/functional for
<stevenaleach>
peer-to-peer traffic and I'm not sure what.
<stevenaleach>
Two days to transfer 25 mb on T-Mobile.... a second or two using a normal wifi connection.
<whyrusleeping>
stevenaleach: thats quite odd...
<whyrusleeping>
my guess is that it's not actually a transfer speed issue, but a NAT traversal issue
<whyrusleeping>
I don't think it's easy for someone to dial to you if you're behind a T-Mobile VPN
<whyrusleeping>
so nodes can't find/connect to you
<stevenaleach>
But the odd thing is going back and forth between server and laptop running "ipfs swarm connect ....." to reintroduce the machines again and again. Sometimes it would succeed and the other node would show in the peers list, but only for maybe 20 seconds or so before I'd have to connect again. Most connect attempts failed, 1 out of every 20 or so finally succeeds again.
<stevenaleach>
Yea, no problem connected over public wifi - works great, but using my phone as a hotspot seems to make things barely functional; currently 202 peers on the laptop, 443 connected to the server.
wrouesnel has quit [Remote host closed the connection]
rendar has joined #ipfs
tmg has joined #ipfs
<whyrusleeping>
General question, anyone can answer: Does anyone object to me releasing the 0.4.5-rc1 without the automatic migrations code for the new version?
<whyrusleeping>
I have this annoying dependency tree where the migrations code tests use a 'release' of ipfs to test against
<whyrusleeping>
but to make that release, i need to have the new migrations code
<drwasho>
1) what version of the code are you using?
<drwasho>
2) How are these ipns resolves being made?
<whyrusleeping>
(i've actually had people that i've known for over a year not know that i was whyrusleeping)
<drwasho>
3) what sort of stuff is in each ipns entry
anonymuse has joined #ipfs
<cpacia>
1) 4.3 2
<cpacia>
sorry 0.4.3
<cpacia>
2) mostly through gateway
<cpacia>
3) just a hash
<whyrusleeping>
okay cool, that makes things pretty easy then :)
<whyrusleeping>
first off, once 0.4.5 is out, i would highly recommend upgrading to it. It has some fixes that should dramatically increase resolve speed
<whyrusleeping>
2) is it the local gateway? or a global hosted one?
<cpacia>
Great.
<drwasho>
2) Yes
<drwasho>
But mostly local
<cpacia>
It's the local gateway. I've got some other code to do the resolve basically the same as the gateway does but the UI isn't using it yet.
<whyrusleeping>
okay, the other thing is that if you want, theres a 'CAP tradeoff' variable in the dht code you can tweak
<whyrusleeping>
basically makes ipfs be okay with being less certain about having the most up to date record
<whyrusleeping>
but makes it easier to DoS the resolve, or feed it an old (but not invalid) record
<whyrusleeping>
we can also play with the local ipns caching TTLs
wrouesnel has quit [Remote host closed the connection]
<whyrusleeping>
right now they are very aggressive (meaning the cache expires way too fast IMO)
<whyrusleeping>
for the 'stock' ipns stuff we're very focused on consistency
<tcrypt>
cpacia: and drwasho do we prefer consistency or availability?
<tcrypt>
IMO, C
<whyrusleeping>
increased consistency == increased resolve time
<tcrypt>
right
<whyrusleeping>
the value is currently at 16/20
<whyrusleeping>
which is quite high
wrouesnel has joined #ipfs
<drwasho>
I think our approach should be rebasing to 0.4.5 first, and seeing what kind of effect that has
<kythyria[m]>
Hm, that leads me to wonder how to detect "we're probably no longer connected to the public swarm, use cached data even if it's old".
<cpacia>
That's a good question. If we do get an old record I think it would increase the possibility of a rejected purchase (since the data the vendor is expecting is potentially different than what the buyer saw). But Im not sure that's a high likelihood.
ashark has quit [Ping timeout: 252 seconds]
<whyrusleeping>
kythyria[m]: yeah... that ones a hard question
structur_ has joined #ipfs
<whyrusleeping>
cpacia: in practice, you're very unlikely to see an old record. especially since the dht is auto-correcting on ipns records
<cpacia>
cool
<whyrusleeping>
I'm hoping to have the 0.4.5-rc1 out by monday
* whyrusleeping
fingers crossed
<drwasho>
awesome
<drwasho>
Thanks whyrusleeping, aside from this issue, everything else is running really well... a lot of our nodes are offline, but their content is still accessible :D
<drwasho>
Which is what we wanted!
<whyrusleeping>
haha, fantastic :)
<drwasho>
We'll do the update to 0.4.5 when it's ready and see how the performance changes, hopefully it will all go well!
<whyrusleeping>
it probably goes without saying, but please do let us know any problems you encounter
john2 has quit [Ping timeout: 276 seconds]
<drwasho>
no problem
<drwasho>
I'll try and hang out here a bit more
<whyrusleeping>
drwasho: cool
<whyrusleeping>
i'd also be interested to hear how your rebasing process is
<whyrusleeping>
and what pain points you have there
<whyrusleeping>
might be something we can do to make that easier
palkeo has quit [Quit: Konversation terminated!]
<kythyria[m]>
Why does `ipfs swarm addrs` list a few peers with hundreds and hundreds of addresses that vary only in port?
<whyrusleeping>
kythyria[m]: ugh, thats a NAT issue
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
<kythyria[m]>
Ah
<whyrusleeping>
some routers assign a different port for EACH outgoing connection, which results in everyone they connect to getting a different address for them, and then everyone shares these addresses around (STUN type stuff)
<whyrusleeping>
its something we need to fix
<deltab>
how many of those are reachable by others?
<whyrusleeping>
probably exactly none
<whyrusleeping>
(at least in that case, in other cases thats the only way we discover good addrs)
<deltab>
maybe there's a way to test them before publishing
<deltab>
or include a provenance
<deltab>
likewise with internal addresses going external
maxlath has joined #ipfs
structur_ has quit [Remote host closed the connection]
structuralist has joined #ipfs
<whyrusleeping>
yeah, our address handling needs work
maxlath has quit [Client Quit]
Boomerang has quit [Quit: leaving]
<whyrusleeping>
ideally we would share those potentially bad addresses, and other peers would only try them if they got the same address from more than one confirmed peer
<whyrusleeping>
(provenance, as you say would help this)
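A toy sketch of that heuristic, only treating an observed address as dialable once more than one already-confirmed peer has reported it, which filters out the one-off ports a NAT hands out per outgoing connection. This is illustrative counting logic only, not libp2p code.

    package main

    import "fmt"

    // observations maps a multiaddr string to the set of confirmed peers
    // that independently reported it for some target peer.
    type observations map[string]map[string]bool

    func (o observations) record(addr, reportingPeer string) {
        if o[addr] == nil {
            o[addr] = make(map[string]bool)
        }
        o[addr][reportingPeer] = true
    }

    // worthDialing returns true once an address has been reported by at
    // least two distinct confirmed peers.
    func (o observations) worthDialing(addr string) bool {
        return len(o[addr]) >= 2
    }

    func main() {
        obs := make(observations)
        obs.record("/ip4/1.2.3.4/tcp/4001", "peerA")
        obs.record("/ip4/1.2.3.4/tcp/53012", "peerA") // NAT-assigned ephemeral port
        obs.record("/ip4/1.2.3.4/tcp/4001", "peerB")

        fmt.Println(obs.worthDialing("/ip4/1.2.3.4/tcp/4001"))  // true
        fmt.Println(obs.worthDialing("/ip4/1.2.3.4/tcp/53012")) // false
    }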