<hermanjunge>
Boom. Love wikipedia. I remember when it started to become popular I'd just `random` into articles
<kode54>
that also reminds me
creeefs has quit [Ping timeout: 260 seconds]
<lgierth>
M-anomie: ah got it -- so IPLD is basically an evolution of JSON-LD that works with content hashes instead of URIs
<kode54>
I tried to ask someone in their main IRC channel if I should do something about an article that uses a reference to a news source they no longer accept
<M-anomie>
Yes, but it loses the self-documenting semantics and the "subject predicate object" thing, doesn't it?
creeefs has joined #ipfs
<lgierth>
M-anomie: that also means you can't have two objects link each other, as there's a chicken/egg problem with the hash :) that's the main thing that's different from a URI-centric model
<lgierth>
subject predicate object is just one way of representing a graph though right? i'm not too deep in semantic web things
<lgierth>
i basically only got exposed to JSON-LD
<lgierth>
i suppose you could think of ipfs as a distributed, content-addressed graph database
<lgierth>
(only that it's not yet very good at typical graph operations)
<hermanjunge>
lgierth, S-P-O is basically what all semantic data is about, and how you work on the data. DBpedia is the best. I'll show you this superquery
<hermanjunge>
So you can ask stuff like "Soccer players who were born in a country with more than 10 million inhabitants, who played as goalkeeper for a club whose stadium has more than 30,000 seats, and whose club's country is different from their birth country"
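(A hedged sketch of what that superquery looks like against DBpedia's public SPARQL endpoint -- each "?subject predicate ?object" line below is one S-P-O triple pattern; the dbo:/dbr: property names are from memory and may not match the current ontology exactly:)

    curl -G 'https://dbpedia.org/sparql' \
      --data-urlencode 'format=text/csv' \
      --data-urlencode 'query=
        SELECT DISTINCT ?player WHERE {
          ?player  a dbo:SoccerPlayer ;
                   dbo:position dbr:Goalkeeper_(association_football) ;
                   dbo:birthPlace ?country ;
                   dbo:team ?club .
          ?country dbo:populationTotal ?pop .
          ?club    dbo:ground ?stadium ;
                   dbo:country ?clubCountry .
          ?stadium dbo:seatingCapacity ?seats .
          FILTER (?pop > 10000000 && ?seats > 30000 && ?clubCountry != ?country)
        } LIMIT 10'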
* Kythyria[m]
has never worked out a nice way of storing things like "this is the third item in that collection" in SPO format
creeefs has quit [Client Quit]
<Kythyria[m]>
RDF's approach is to have a way of forming an infinite number of predicates for "nth item in"
<hermanjunge>
I imagine that in order to store this as IPLD links, we would have to link the SPO elements themselves, and on top of that create a kind of index. People will collect those indexes, keeping the best, discarding the worst. So you could have the "FIFA index of football players", "CIA world book of cities", whatever...
creeefs has joined #ipfs
<lgierth>
or convert to JSON-LD first and go from there
<aseriousgogetta>
fack, anyone here a ruby pro?
<aseriousgogetta>
damn ruby gems are giving me hell
<aseriousgogetta>
ERROR: Failed to build gem native extension.
<lgierth>
nope this is mostly an ipfs and distributed web channel
<aseriousgogetta>
i can pastebin if anyone has a clue
<lgierth>
##ruby can help i guess
<aseriousgogetta>
yeah, im working on ipfs and decentralized networking
<lgierth>
ok fiiine :)
<aseriousgogetta>
i wont bother about my ruby issues anymore
<aseriousgogetta>
:P
<lgierth>
probably some -dev package missing -- check the error log, the path is printed somewhere there
<aseriousgogetta>
yar
<aseriousgogetta>
can't find header files for ruby at /usr/lib/ruby/include/ruby.h
<lgierth>
apt-get install libruby-dev
<lgierth>
or whatever your system's package manager is :)
<aseriousgogetta>
ofc it's apt-get
<aseriousgogetta>
:P
<aseriousgogetta>
it's still feeding me shit, but i appreciate the help. will look around for something specific
<lgierth>
i usually use apt-file search when i need to find the package that contains a certain .h
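(For reference, the apt-file workflow lgierth describes, applied to the missing ruby.h above; package names vary by distro and ruby version -- on Debian/Ubuntu the header package is typically ruby-dev:)

    sudo apt-get install apt-file && sudo apt-file update
    apt-file search ruby.h            # e.g. ruby2.3-dev: /usr/include/ruby-2.3.0/ruby.h
    sudo apt-get install ruby-dev     # pulls in the headers the gem build needs
    gem install <the-gem>             # then retry the native extension build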
stoopkid_ has quit [Quit: Connection closed for inactivity]
palkeo_ has joined #ipfs
Aranjedeath has joined #ipfs
palkeo_ has quit [Ping timeout: 255 seconds]
anewuser has joined #ipfs
rodolf0 has joined #ipfs
rodolf0 has quit [Ping timeout: 258 seconds]
cellvia has joined #ipfs
<cellvia>
im sorry, im sure this is a common question, and partially off topic, but the filecoin room looks relatively bare and not sure where else to ask. so: where could i, a common netizen, go about acquiring some Filecoin during the ICO next week? it seems unfair that only US accredited investors can buy it. surely there is a way.
anewuser has quit [Quit: anewuser]
creeefs has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<achin>
see #filecoin for filecoin-related questions
jsgrant has quit [Read error: Connection reset by peer]
jsgrant has joined #ipfs
Oatmeal has joined #ipfs
jsgrant has quit [Remote host closed the connection]
aceluck has quit [Remote host closed the connection]
aceluck has joined #ipfs
spacebar_ has quit [Quit: spacebar_ pressed ESC]
aceluck has quit [Ping timeout: 268 seconds]
<limbo_>
Is there a different error code from gateways for hashes that don't exist or are unavailable, versus files the gateway has blacklisted? (whether for legal reasons, bandwidth abuse, or the admin just not wanting to serve that hash)
chris613 has quit [Quit: Leaving.]
Aranjedeath has quit [Ping timeout: 246 seconds]
m0ns00n has joined #ipfs
zpaz has joined #ipfs
gah111 has quit [Remote host closed the connection]
ygrek has joined #ipfs
appa_ has joined #ipfs
<whyrusleeping>
limbo_: yeah, blocked content from our gateways returns a 451
<whyrusleeping>
"unavailable for legal reasons"
palkeo_ has quit [Ping timeout: 255 seconds]
ygrek has quit [Ping timeout: 248 seconds]
Oatmeal has quit [Ping timeout: 276 seconds]
domanic has joined #ipfs
vtomole has quit [Ping timeout: 260 seconds]
_whitelogger has joined #ipfs
m0ns00n has quit [Quit: quit]
domanic has quit [Ping timeout: 240 seconds]
domanic has joined #ipfs
rendar has joined #ipfs
<limbo_>
For all kinds of blocks, or just legal ones?
<limbo_>
for example, let's say someone's abusing a free gateway, and the operator wants to put a stop to it.
<whyrusleeping>
we're currently applying those via nginx
domanic has quit [Ping timeout: 240 seconds]
<limbo_>
ahh, I misread the "our" in "our gateways".
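(A quick way to see the distinction discussed above, with placeholder hashes; an unavailable hash typically just times out at the gateway, e.g. with a 504, while content blocked at the nginx layer gets the 451 immediately:)

    curl -s -o /dev/null -w '%{http_code}\n' https://ipfs.io/ipfs/QmMissingHash   # hangs, then e.g. 504
    curl -s -o /dev/null -w '%{http_code}\n' https://ipfs.io/ipfs/QmBlockedHash   # 451 Unavailable For Legal Reasons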
pat36 has joined #ipfs
maxlath has joined #ipfs
Caterpillar has joined #ipfs
sirdancealot has quit [Remote host closed the connection]
aseriousgogetta has quit [Remote host closed the connection]
ianopolous has quit [Ping timeout: 255 seconds]
ianopolous has joined #ipfs
ianopolous has quit [Ping timeout: 240 seconds]
ianopolous_ has joined #ipfs
zpaz has quit [Quit: Leaving]
corvinux has joined #ipfs
sirdancealot has joined #ipfs
sirdancealot has quit [Remote host closed the connection]
larpanet has joined #ipfs
maxlath has quit [Ping timeout: 255 seconds]
Lymkwi has quit [Ping timeout: 240 seconds]
Lymkwi has joined #ipfs
jungly has quit [Remote host closed the connection]
jungly has joined #ipfs
corvinux has quit [Ping timeout: 260 seconds]
domanic has joined #ipfs
aceluck has joined #ipfs
aceluck has quit [Ping timeout: 240 seconds]
maxlath has joined #ipfs
seagreen has quit [Ping timeout: 246 seconds]
corvinux has joined #ipfs
maxlath has quit [Ping timeout: 255 seconds]
ianopolous_ has quit [Ping timeout: 255 seconds]
aceluck has joined #ipfs
jkilpatr has joined #ipfs
domanic has quit [Ping timeout: 248 seconds]
erictapen has joined #ipfs
ilyaigpetrov has joined #ipfs
ianopolous has joined #ipfs
m0ns00n has joined #ipfs
droman has joined #ipfs
ianopolous has quit [Ping timeout: 255 seconds]
ianopolous has joined #ipfs
aceluck has quit []
erictapen has quit [Ping timeout: 260 seconds]
domanic has joined #ipfs
}ls{ has joined #ipfs
rcat has joined #ipfs
rcat has quit [Quit: leaving]
erictapen has joined #ipfs
pat36 has quit [Read error: Connection reset by peer]
rcat has joined #ipfs
pat36 has joined #ipfs
rcat has quit [Client Quit]
rcat has joined #ipfs
ianopolous_ has joined #ipfs
<ilyaigpetrov>
If I host lots of wiki pages on ipfs -- how could I implement page searching in this kind of wiki?
<rafajafar>
if you hop into #soundbutt on efnet, they're not concerned with getting it, they're concerned with hosting it
<patagonicus42>
Ah, cool.
<rafajafar>
which is why I'm suggesting maybe you guys help them
<rafajafar>
advise them or something
patagonicus42 is now known as patagonicus
<rafajafar>
they have some confusion about IPFS and how it could help
<rafajafar>
or even if it could
<rafajafar>
clearly systems will need to be built to support the data, from what I'm hearing
<rafajafar>
the nature of that plus the cost/benefits is the issue
<rafajafar>
they're worried about nodes dropping, so they don't want to stop hosting the data centrally
<rafajafar>
but I'm trying to suggest to them it can assist with more than that, such as bandwidth
<rafajafar>
just seems like a solid early public use case of the tech to me
<patagonicus>
Yeah, could probably work.
shizy has joined #ipfs
rcat_ has quit [Ping timeout: 240 seconds]
rcat has joined #ipfs
gaf_ has quit [Ping timeout: 258 seconds]
chaostracker has joined #ipfs
droman has quit []
rcat has quit [Quit: leaving]
gaf_ has joined #ipfs
<SchrodingersScat>
rafajafar: well, you know at least archive team would apparently pin it?
<rafajafar>
?
<SchrodingersScat>
idk if Jason Scott cares about ipfs, but he'd probably pin the SoundCloud data if provided
<patagonicus>
I doubt it. They were told not to download it; I'm assuming that includes not rehosting it from someone else who downloaded it.
rcat has joined #ipfs
<SchrodingersScat>
patagonicus: I'm surprised they heeded soundcloud's plea. But it wouldn't cost soundcloud any bandwidth if they downloaded it via IPFS; it would cost that reddit user or whatever. although they're probably in the same efnet channels, likely on top of it.
<_mak>
is it possible to get the filesize of a hash without downloading the full file?
<SchrodingersScat>
_mak: sure, file ls should do this, no?
<SchrodingersScat>
oh, or not
<SchrodingersScat>
_mak: but you should still be able to get filesize
gmoro has quit [Remote host closed the connection]
<_mak>
SchrodingersScat: it's in the json, yeah
<_mak>
I wonder if the ls works for files I don't have pinned locally
rcat has quit [Ping timeout: 255 seconds]
rcat has joined #ipfs
scde has joined #ipfs
<_mak>
it does
<_mak>
awesome, thanks
droman has joined #ipfs
<SchrodingersScat>
_mak: and am I silly for using curl -I 127.0.0.1:8080/ipfs/blahblah to get the file info?
<SchrodingersScat>
I wonder if that caches it, never really tested.
<_mak>
I'm using what you have said: ipfs --enc=json file ls
<SchrodingersScat>
oh, k
gmoro has joined #ipfs
shizy has quit [Ping timeout: 268 seconds]
<SchrodingersScat>
_mak: checking 'ipfs commands' there's also files ls, which may have different options, and files stat which is likely more what you're looking for.
<_mak>
hmm, I'll check it out
<_mak>
tks
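(The approaches from this exchange side by side, with a placeholder hash; the curl variant asks the local gateway, which may itself have to fetch the root block first:)

    ipfs --enc=json file ls QmYourHash              # per-link sizes in the JSON output
    ipfs files stat /ipfs/QmYourHash                # Size / CumulativeSize fields
    ipfs object stat QmYourHash                     # CumulativeSize of the whole DAG
    curl -I http://127.0.0.1:8080/ipfs/QmYourHash   # Content-Length from a HEAD request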
jungly has quit [Remote host closed the connection]
rcat has quit [Ping timeout: 240 seconds]
rcat has joined #ipfs
caladrius has quit [Ping timeout: 255 seconds]
rcat has quit [Ping timeout: 260 seconds]
rcat has joined #ipfs
ianopolous has joined #ipfs
ianopolous_ has quit [Ping timeout: 276 seconds]
chris613 has joined #ipfs
maxlath has joined #ipfs
cblgh has quit [Ping timeout: 260 seconds]
cblgh has joined #ipfs
cblgh has joined #ipfs
cblgh has quit [Changing host]
creeefs has joined #ipfs
A124 has quit [Quit: '']
A124 has joined #ipfs
gmoro has quit [Remote host closed the connection]
chris613 has quit [Quit: Leaving.]
stoopkid_ has joined #ipfs
scde has quit [Quit: Leaving]
gmoro has joined #ipfs
rcat has quit [Ping timeout: 240 seconds]
creeefs has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
rcat has joined #ipfs
<ilyaigpetrov>
What puzzle pieces do I need to create an ipfs-based wiki uncensorably readable from a browser?
<ilyaigpetrov>
1. A WebRTC-based p2p lib for talking to other ipfs nodes, preferably startable with any ipfs-peer address (not just a central server address).
unprison has quit [Remote host closed the connection]
aceluck has quit [Ping timeout: 240 seconds]
jhand has joined #ipfs
appa__ has joined #ipfs
corvinux has quit [Quit: Leaving]
hashcore has joined #ipfs
jsgrant has joined #ipfs
intern has joined #ipfs
<ilyaigpetrov>
lidel: I have a few stylistic quibbles: words in captions should start with capitals, and trailing spaces should be avoided (configure your editor to show them).
<ilyaigpetrov>
lidel: if you use webpack, you won't need any awkward copying from node_modules to sources
<ilyaigpetrov>
lidel: so Travis CI runs your mocha/sinon tests on its server -- is this right? Never used it.
<lidel>
ilyaigpetrov, yes, it runs them in a real browser
appa__ has quit [Ping timeout: 255 seconds]
<ilyaigpetrov>
lidel: both firefox and chromium, for free?
<lidel>
only Firefox right now, but you can run chromium too, sure
<lidel>
the only dependency is to start a fake X server via /etc/init.d/xvfb start
rcat has quit [Quit: leaving]
<SchrodingersScat>
xvfb is nice for silly things with guis :)
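(The Travis CI recipe of that era for what lidel describes -- start xvfb, point DISPLAY at it, then let the test runner launch Firefox; the npm script name is an assumption:)

    export DISPLAY=:99.0
    sh -e /etc/init.d/xvfb start
    sleep 3      # give the fake X server a moment to come up
    npm test     # karma/mocha launches Firefox against $DISPLAY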
Encrypt has quit [Quit: Quit]
<lidel>
ilyaigpetrov, I was hesitant to introduce a compilation phase, as it may slow down the review process at AMO (http://addons.mozilla.org/); once we have at least one review of the WebExtension version accepted, we will probably refactor to use webpack
<ilyaigpetrov>
lidel: do you need any help with writing extensions? Any ideas waiting to be implemented?
<moshisushi>
TypeError: crypto.generateKeyPair is not a function
bielewelt1 has joined #ipfs
bielewelt has quit [Ping timeout: 248 seconds]
appa__ has quit [Ping timeout: 248 seconds]
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
appa__ has joined #ipfs
maxlath has joined #ipfs
<domanic>
Question: I was wondering how many items there are in the whole ipfs DHT?
ianopolous_ has quit [Ping timeout: 246 seconds]
<domanic>
how many unique blocks is one thing, how many peers have each block is another interesting question.
<whyrusleeping>
domanic: given the nature of the system, that's a hard question to get an answer to
<domanic>
whyrusleeping: you should be able to sample the DHT and get an approximate answer though, right?
<whyrusleeping>
yeah, you can approximate that way
<whyrusleeping>
but not all nodes even advertise all the content they have
<whyrusleeping>
i know mine don't, otherwise they would be using all their bandwidth 24/7 to tell everyone about the terabytes of stuff i have
<domanic>
no, but that data is basically "cold" -- no way for peers to ask you for it either
bielewelt has joined #ipfs
<domanic>
it's there but it's not retrievable, like the "deep web"
<whyrusleeping>
well no, you could connect to my peer and ask for it over bitswap
whythat has joined #ipfs
<domanic>
okay, but only if it knew somehow to ask
<domanic>
anyway, let's call that "deep ipfs", and "shallow ipfs" is the stuff that is easily accessible because you can look it up in the DHT
bielewelt1 has quit [Ping timeout: 240 seconds]
<daviddias>
moshisushi: changed that today
<whyrusleeping>
domanic: fair enough
<daviddias>
moshisushi: released like 1 hour ago. Now it sits behind `crypto.keys`. Sorry if the change wasn't clear
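(A minimal smoke test of the renamed entry point daviddias mentions; the RSA/2048/callback signature is an assumption about the 0.9.x API:)

    node -e "
      const crypto = require('libp2p-crypto')
      // was crypto.generateKeyPair(...) in 0.8.x, now lives under crypto.keys
      crypto.keys.generateKeyPair('RSA', 2048, (err, key) => {
        console.log(err || 'ok')
      })"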
<whyrusleeping>
then yeah, you could essentially just sample a longer-lived node (> 1 day) and multiply that by the total number of nodes to get a rough estimate
<daviddias>
are you using ~ or ^ in your dep tree?
<moshisushi>
daviddias: oh it's clear but the benchmark in js-libp2p-secio hasn't been changed as far as I can tell
<whyrusleeping>
probably divided by 20 due to kademlia
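(Back-of-envelope version of that estimate, with invented numbers: sample the records one long-lived node holds, scale by the node count, divide by Kademlia's replication factor of 20:)

    records_on_sampled_node=50000
    total_nodes=100000
    echo $(( records_on_sampled_node * total_nodes / 20 ))   # ~250 million unique keys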
<moshisushi>
sorry if I'm making a mistake here.. thought I was looking at latest in master
<domanic>
whyrusleeping: yup, so do you at least know the number of nodes offhand?
<domanic>
whyrusleeping: when you say your node doesn't advertise, that is because it would have to reannounce each block, correct?
<whyrusleeping>
right
<domanic>
how big is the average block? (I understand it uses rabin, so they're variably sized, right?)
<daviddias>
domanic: default is 256KiB
<whyrusleeping>
actually, rabin isn't the default, so data blocks tend to be 256k
<daviddias>
rabin is not used by default
<domanic>
daviddias: how come?
<whyrusleeping>
rabin is difficult to get working right
<whyrusleeping>
notably, it messes with storage
<whyrusleeping>
disk blocks are generally 1k or 4k
shizy has joined #ipfs
<whyrusleeping>
and if you don't make your blocks align with disk blocks, you pay an overhead for the extra space
<daviddias>
moshisushi: there you go :)
<domanic>
right... so are you just using aligned blocks like bittorrent?
<whyrusleeping>
yeap
<whyrusleeping>
we started out that way, and planned on moving to rabin, but haven't found it compelling enough to do so
<domanic>
right, yeah. it seemed so promising. I did some experiments with rabin once (for diffing) and found it wasn't as good as i hoped
<whyrusleeping>
maybe given a better storage backend it would work
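(The chunking options being compared, as exposed by go-ipfs at the time; check `ipfs add --help` for the exact flag syntax on your version:)

    ipfs add --chunker=size-262144 bigfile   # the default: fixed 256 KiB blocks
    ipfs add --chunker=rabin bigfile         # content-defined, variably sized blocks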
<domanic>
how often do you have to reannounce something to the DHT?
vtomole has quit [Ping timeout: 260 seconds]
<moshisushi>
daviddias: reason I was poking with this was a browser vs. node test I've set up to try to narrow down where my performance problems are introduced... committed the test (based on secio benchmark) here: https://github.com/moshisushi/ipfs-lab/tree/master/secio-test
<moshisushi>
runs 5-10 times slower in my browser, but not sure yet how fair that comparison is
deep-book-gk_ has joined #ipfs
<daviddias>
moshisushi: have you tried multiple browsers? Any significant difference between chrome and firefox?
deep-book-gk_ has left #ipfs [#ipfs]
shizy has quit [Ping timeout: 260 seconds]
<moshisushi>
daviddias: yah Firefox seems a little faster than Chrome.. lower variation between test runs as well
<daviddias>
moshisushi: I believe that is just because of Chrome's aggressive throttling policies
<daviddias>
if you are not paying 100% attention to the UI, chrome will throttle resources for that specific tab
<daviddias>
and since so much of the encryption happens on the main thread
<daviddias>
it just drags the whole connection down
<domanic>
whyrusleeping: how does ipfs decide to announce each block? or is there some strategy for estimating which peers might have a block (if it's not advertised directly)
<moshisushi>
daviddias: yeah I remember you mentioned that
lupi_ has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<daviddias>
moshisushi: might be more useful to do all the benchmarks in firefox, which won't have these external factors
<moshisushi>
daviddias: I'm gonna actually try to package the unmodified secio bench for browsers instead of my slightly modified one
<daviddias>
and by improving in firefox, you will improve in chrome as well
<daviddias>
domanic: for retrieval: every time you fetch a file (ipfs get QmHashFile), bitswap will try to get the file by issuing just one DHT query; the rationale is that nodes that have one block of the file will also have the other blocks (or be close to having them). if after a $TIMEOUT we haven't received all the blocks we have in our queue, it issues another DHT query
<daviddias>
domanic: for provide: it will provide the whole file, but all of those provide DHT calls carry the full set of hashes, so that you don't exhaust the network providing each block individually (it is still brutal though)
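(Both halves of this are observable from the CLI, with a placeholder hash:)

    ipfs dht findprovs QmHashFile   # who is providing the root block
    ipfs refs -r QmHashFile         # the child blocks bitswap will queue next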
lupi_ has joined #ipfs
<moshisushi>
daviddias: the strangest thing is, after I changed to the new js-libp2p-crypto version, 0.9.x instead of 0.8.8, and to using crypto.keys.generateKeyPair, performance of my bench got extremely bad even in nodejs!
creeefs has joined #ipfs
<domanic>
daviddias: so you only announce the whole file's hash, then it's assumed you have the subblocks?
<domanic>
and if an object links to another - then you can probably ask for the linked block from peers with the linking block
<daviddias>
domanic: you always 'announce' (we use the word 'provide' everywhere) every block as a provider of the content. You query (look for who is providing) only the first block first, and then do more queries if there are still blocks missing
<domanic>
daviddias: got it
<moshisushi>
daviddias: "and invite the community to start using it"... heh perhaps I shouldn't poke around with this yet :)
<domanic>
daviddias: once you retrieve a block, do you start providing it automatically?
erictapen has quit [Ping timeout: 246 seconds]
chrisbratlien has joined #ipfs
lassulus has quit [Ping timeout: 255 seconds]
<daviddias>
moshisushi: I think you are providing a lot of value by creating more benchmarks and identifying perf issues and seems that you don't need a tutorial to understand how it works internally at all :)
maxlath has quit [Quit: maxlath]
<daviddias>
domanic: you do
<moshisushi>
daviddias: oh yeah, accomplished copy-paste monkey u know!
<daviddias>
moshisushi: in your benchmarks, separate the keypair generation from the actual data transfer
<moshisushi>
daviddias: yep that's what I'm about to do
<domanic>
daviddias: how long does the DHT remember the providers?
<daviddias>
we use http://npmjs.org/keypair for the RSA key generation (because for some reason, the Node.js crypto module never added this feature ?!??)
<daviddias>
and http://npmjs.org/keypair is pure JS, which makes it slow. The other solution would be to bring in node-webcrypto-ossl, but that native dependency makes life hard in multiple cases
lassulus_ is now known as lassulus
lassulus has quit [Changing host]
lassulus has joined #ipfs
<moshisushi>
daviddias: yeah browser support is one miserable aspect of that, I suppose :)
<daviddias>
an alternative is to encourage all of these crypto operations to happen in WebAssembly land
<domanic>
the crypto module and webcrypto are OpenSSL-based anyway, just use libsodium instead
<domanic>
hashes and symmetric encryption are so fast it doesn't really matter if they are in javascript
<domanic>
but if you want to verify lots of signatures js is a bit slow
Ivion has joined #ipfs
creeefs has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<daviddias>
domanic: yeah, libsodium is fantastic. makes decisions about crypto super simple
<daviddias>
we need to support RSA though
<daviddias>
(we actually use tweetnacl for other things in libp2p)
<chrisbratlien>
Frustrated that iTunes and Downcast consider podcast episode links which point to my localhost IPFS gateway as invalid, simply because their multihash URLs don't end in ".mp3". Kudos to gPodder, however, for working without issue
<domanic>
daviddias: send a robot back from the future to tell jbenet to use libsodium instead
rtjure has joined #ipfs
creeefs has joined #ipfs
gmoro has quit [Ping timeout: 240 seconds]
<daviddias>
:D
Bhootrk_ has joined #ipfs
creeefs has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]