<lgierth>
also, use the /ipfs/<path>, as ipns isn't complete yet
shea256 has quit [Read error: Connection reset by peer]
shea256 has joined #ipfs
devbug has joined #ipfs
elavoie_ has joined #ipfs
Luzifer_ has joined #ipfs
feross_ has joined #ipfs
null_rad- has joined #ipfs
pfraze has joined #ipfs
pfraze_ has quit [Ping timeout: 250 seconds]
kumavis_ has joined #ipfs
necro666_ has joined #ipfs
AtnNn_ has joined #ipfs
Skaag_ has joined #ipfs
xenkey1 has joined #ipfs
daviddias_ has joined #ipfs
cleichner_ has joined #ipfs
svetter_ has joined #ipfs
M-jbenet1 has joined #ipfs
<blame>
What is the ideal likelihood of a hash bit being 1 in a bloom filter?
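(Editorial note on blame's question: a Bloom filter is space-optimal when each bit ends up set with probability 1/2, which happens when the number of hash functions is k = (m/n) ln 2 for m bits and n inserted items. A quick sketch of the arithmetic, with example sizes chosen here for illustration:)

```python
import math

def optimal_bloom(m_bits, n_items):
    """Return (optimal hash count k, expected fraction of bits set to 1)."""
    k = (m_bits / n_items) * math.log(2)  # k = (m/n) ln 2
    # After inserting n items with k hashes each, P(bit == 1) = 1 - (1 - 1/m)^(k*n)
    fill = 1 - (1 - 1 / m_bits) ** (round(k) * n_items)
    return round(k), fill

# with ~9.6 bits per item, the optimal k is 7 and the filter sits near half full
k, fill = optimal_bloom(m_bits=9600, n_items=1000)
print(k, round(fill, 2))  # -> 7 0.52
```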
akhavr has quit [*.net *.split]
Guest62677 has quit [*.net *.split]
reit has quit [*.net *.split]
AtnNn has quit [*.net *.split]
FunkyELF has quit [*.net *.split]
daviddias has quit [*.net *.split]
cleichner has quit [*.net *.split]
null_radix has quit [*.net *.split]
oed has quit [*.net *.split]
nicknikolov has quit [*.net *.split]
xenkey has quit [*.net *.split]
Skaag has quit [*.net *.split]
svetter has quit [*.net *.split]
giodamelio has quit [*.net *.split]
feross has quit [*.net *.split]
elavoie has quit [*.net *.split]
M-jbenet has quit [*.net *.split]
davidar has quit [*.net *.split]
kumavis has quit [*.net *.split]
ogd has quit [*.net *.split]
Luzifer has quit [*.net *.split]
necro666 has quit [*.net *.split]
elavoie_ is now known as elavoie
svetter_ is now known as svetter
Luzifer_ is now known as Luzifer
daviddias_ is now known as daviddias
cleichner_ is now known as cleichner
oed has joined #ipfs
feross_ is now known as feross
<lgierth>
jbenet: could you add dnslink-deploy.git to the website team, and also add richardlitt to it? i think we locked ourselves out of that repo by transferring it over
M-davidar has joined #ipfs
<jbenet>
giodamelio: that's great!! be wary that ipns is still being worked on and not robust yet.
<blame>
right now the only indexed page is the ipfs homepage!
<lgierth>
i just noticed the protected branches feature which prevents force-pushes to certain branches
<achin>
i'm looking for a description for the on-disk block format (i.e. how to read files in ~/.ipfs/blocks/). where's the best place to look up this info?
Guest73396 has quit [Read error: Connection reset by peer]
samiswellcool has quit [Quit: Connection closed for inactivity]
<fazo>
wow, ipfs is really nice
<fazo>
except managing pinned files with go-ipfs
<fazo>
too bad I can't understand go :(
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping>
fazo: yeah, we want to find a nicer way to manage pinned files
<whyrusleeping>
always interested to hear ideas on UX for that
<fazo>
I tried to look how pinned files are stored, but no luck yet
<fazo>
I mean how the references are stored
<fazo>
I think a solution could be to associate a local name to the pinned hash
<fazo>
because you either "ipfs add", so you can get the name from the actual file names
<fazo>
or you manually pin stuff
<fazo>
so having the possibility to label a pinned hash could be just enough
<fazo>
just locally
<fazo>
and maybe storing the disk space that a file/folder takes up
<fazo>
so when you "ipfs pin ls"
<fazo>
you see the hashes, the labels (if they were ever set) and the size
<fazo>
way easier to manage
qqueue has joined #ipfs
shea256 has joined #ipfs
<whyrusleeping>
yeah
<whyrusleeping>
i like it
<whyrusleeping>
something like 'ipfs pin add <hash> -tag=blah'
<whyrusleeping>
or something
<fazo>
then you could "ipfs label <hash> <name>" to set a label and "ipfs label <hash>" to get it
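(Editorial note: no such command exists in go-ipfs; a minimal sketch of the `ipfs label` idea being proposed here, as a purely local JSON store so labels never touch the content or its hash — the store location and command shape are invented:)

```python
import json
import os
import tempfile

def label(store_path, hash_, name=None):
    """Sketch of a hypothetical `ipfs label <hash> [<name>]`:
    with a name, set the label; without one, look it up.
    Labels live in a local JSON file only."""
    labels = {}
    if os.path.exists(store_path):
        with open(store_path) as f:
            labels = json.load(f)
    if name is None:
        return labels.get(hash_)
    labels[hash_] = name
    with open(store_path, "w") as f:
        json.dump(labels, f)
    return name

store = os.path.join(tempfile.mkdtemp(), "labels.json")  # stand-in for a file under ~/.ipfs
label(store, "QmSomePinnedHash", "that 2 gb folder")
print(label(store, "QmSomePinnedHash"))  # that 2 gb folder
```

A real implementation would hang this off the pinner's datastore rather than a loose JSON file.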
<fazo>
it also serves as a way to remember what a hash actually is
<fazo>
what it contains*
<fazo>
@whyrusleeping: what do you think?
<fazo>
yeah of course
<spikebike>
whyrusleeping: and use tags for all metadata?
<spikebike>
whyrusleeping: like say parent dir
<fazo>
or if you pipe data to 'ipfs add' you could do something like '<data> | ipfs add --label=somefile'
<whyrusleeping>
mmm, ipfs label... maayybeee
<fazo>
whyrusleeping: well, it needs to be ironed out of course
<spikebike>
maybe autopopulate some tags, like say creation-date
<fazo>
but I think we really need a way to label stuff locally or else when we have hundreds of pinned files and want to remove that 2 gb folder...
<whyrusleeping>
i think i like the direction this is going
<whyrusleeping>
i was mulling over an ipfs alias for peer IDs
<whyrusleeping>
but i think having a generic 'label' makes more sense
<fazo>
yes, labeling peers too is a great idea
<spikebike>
tags seem like a natural complement to a content addressable store.
<whyrusleeping>
especially one who pulls heavily from git
<whyrusleeping>
;)
<fazo>
uhm, I think label is the better word
<fazo>
because you can have multiple tags on an object, but the label is unique
<fazo>
the point is to recognize objects, categorizing them doesn't look as useful to me
<spikebike>
ipfs stat would show all of an object's tags
<spikebike>
seems like there should only be one unique label per object, the sha256 hash
<fazo>
spikebike: by tags, you mean user definable tags or just metadata like the date it was pinned?
<spikebike>
fazo: ideally some defaults like the standard unix metadata (size, owner, creation date, ...)
<fazo>
spikebike: yes, of course, but I was thinking about a local, optional and user definable label used to recognize pinned objects
<spikebike>
and user definable tags
<spikebike>
fazo: an image app might for instance expose image metadata to the IPFS tags
<spikebike>
(size, resolution, x/y, ISO, long/lat, etc.)
<spikebike>
IMO one of the biggest problems with current traditional filesystems is the lack of user defined metadata
<spikebike>
thus each app has a different and incompatible way to add that metadata, like apple's resource forks, or custom giant binary blobs for databases, etc.
<fazo>
spikebike: yeah, you would need to include metadata in the actual data to do that I think
<fazo>
spikebike: I agree with local metadata and a user-definable label, maybe even tags even though I don't really see the point, but honestly I think this data needs to be local
<fazo>
spikebike: I guess you're right. Though including metadata like that would probably destroy the protocol
<fazo>
spikebike: say I have an mp3 song which is byte-for-byte identical to the one you have, but mine has different metadata
<spikebike>
dunno. put and get could be similar, but there'd need to be a tag/untag that's similar
<fazo>
spikebike: is the hash different? then it's not the same file?
<spikebike>
tags shouldn't change the hash.
<spikebike>
(IMO)
<spikebike>
so when you upload a movie, picture, or audio file the sha256 of the data should never change, and you can always read it out to a normal filesystem unchanged
shea256_ has joined #ipfs
shea256 has quit [Read error: Connection reset by peer]
<spikebike>
similarly changing metadata on a unix filesystem doesn't change the checksum of the underlying files
<spikebike>
(unless you count the directory itself as a file)
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<fazo>
spikebike: yes, you're right. But there is no way to share them via IPFS if they can't change the hash
<fazo>
spikebike: including file metadata in the protocol requires it to be rethought almost entirely
<fazo>
spikebike: I don't understand, if metadata doesn't affect the hash of the data, where is the metadata stored?
<spikebike>
hrm, well does the ipfs protocol include a directory object?
<spikebike>
could the tags be hidden in the directory object, with each object holding a reference to the sha256 of its subobjects?
<spikebike>
that way tag updates change the directory object, but not any of the referenced objects.
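(Editorial note: a toy illustration of spikebike's point, using plain dicts and sha256 in place of real IPFS objects — retagging changes the hash of the wrapping directory object, but not of the file it links to:)

```python
import hashlib
import json

def obj_hash(obj):
    # stand-in for an IPFS object hash: sha256 of a canonical JSON encoding
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

file_hash = hashlib.sha256(b"movie bytes...").hexdigest()

dir_v1 = {"links": [{"name": "movie.mkv", "hash": file_hash}], "tags": []}
dir_v2 = {"links": [{"name": "movie.mkv", "hash": file_hash}], "tags": ["vacation"]}

assert obj_hash(dir_v1) != obj_hash(dir_v2)     # tag update changes the directory object...
assert dir_v2["links"][0]["hash"] == file_hash  # ...but the referenced file hash is untouched
```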
<fazo>
spikebike: hey, it's not bad.
<fazo>
spikebike: it's actually a nice idea
<spikebike>
seems like IPNS isn't done anyways, and that's where you need the metadata mostly anyways
Spinnaker has joined #ipfs
<fazo>
spikebike: the way I'm using ipfs, I don't really need ipns
<fazo>
spikebike: but I need the ability to label pinned objects :(
<spikebike>
IMO enabling anything local only sounds like a bad idea
<fazo>
spikebike: why is that?
<spikebike>
It's not I(except for special local access)PFS
<spikebike>
and if someone has a directory of valuable tagged data seems silly to not be able to share the tags
<spikebike>
I'm all for userA making a new view of UserB's data with their own personal tags
<whyrusleeping>
^ +1
<spikebike>
but userA should be able to share that view, not make it local only
voxelot has quit [Ping timeout: 250 seconds]
<spikebike>
so the filesystem should be a global namespace.
<fazo>
spikebike: makes sense
voxelot has joined #ipfs
<spikebike>
there is a new tagging standard for filesystems, basically a generation newer than EXIF
<spikebike>
kinda cool, I can now tag photos locally on my desktop, and then upload them to a web gallery that will make different views on the same data based on their tags
<fazo>
spikebike: what about a way to recognize what a hash represents in the pinned hash list
<fazo>
spikebike: I'm talking about go-ipfs
<fazo>
spikebike: do you think being able to set a local name to recognize hashes and peers is a good idea?
<spikebike>
well the directory object, let's think of it like a SQL database for the moment
<spikebike>
could have a map between tags and sha256sums (referring to an object)
qqueue has quit [Ping timeout: 240 seconds]
<spikebike>
of course you could query it by asking what tags do you have for a specific checksum
<spikebike>
which would describe the object
qqueue has joined #ipfs
<spikebike>
and some tags would have special meaning, like say that they are pinned
<spikebike>
so you could do a query to show all pinned files and their filenames.
<fazo>
spikebike: wow, you found a way better solution than mine.
<spikebike>
I'm biking home shortly, I'll ponder. a Sha256 for the directory object plays well with CAS and CoW
<spikebike>
it would still be painful to iterate across all dirs though
magneto1 has joined #ipfs
akhavr1 has joined #ipfs
pfraze has quit [Remote host closed the connection]
wasabiiii has quit [Ping timeout: 250 seconds]
wasabiiii1 has joined #ipfs
compleatang has quit [Quit: Leaving.]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
compleatang has joined #ipfs
Score_Under has joined #ipfs
thomasreggi has joined #ipfs
<spikebike>
hrmpf
<spikebike>
trying to find the new tagging standard that digikam and piwigo support
ScoreUnder has quit [*.net *.split]
fazo has quit [*.net *.split]
giodamelio has quit [*.net *.split]
qqueue has quit [*.net *.split]
akhavr has quit [*.net *.split]
Guest62677 has quit [*.net *.split]
reit has quit [*.net *.split]
akhavr1 is now known as akhavr
magneto1 has quit [Ping timeout: 256 seconds]
jager is now known as stormbringer
stormbringer is now known as jager
giodamelio has joined #ipfs
Guest62677 has joined #ipfs
qqueue has joined #ipfs
Guest62677 has quit [Changing host]
Guest62677 has joined #ipfs
pfraze has joined #ipfs
HostFat has quit [Read error: Connection reset by peer]
nessence has quit [Ping timeout: 240 seconds]
<voxelot>
is there a way to add a CORS exception to the daemon for the api when not running the api on a server but in the browser?
voxelot_ has joined #ipfs
devbug has joined #ipfs
voxelot has quit [Ping timeout: 246 seconds]
<voxelot_>
nvm, as long as you add an exception for localhost:8080 and run it on your gateway
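(Editorial note: the daemon's allowed origins can be set through its config; a sketch along these lines, where the origin list is an example and should match wherever the page is served from:)

```shell
# allow API calls from a page served by the local gateway
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["http://localhost:8080"]'
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["GET", "POST", "PUT"]'
# restart the daemon for the new headers to take effect
```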
simonv3 has quit [Quit: Connection closed for inactivity]
<gatesvp>
@whyrusleeping: tried a couple of the options including junction and mklink and they're not working for me... it's funny because I can `ls` into the directory without issue
<gatesvp>
but when I run `go build` it seems to lose its brain
<whyrusleeping>
wtf
<whyrusleeping>
thats annoying...
<whyrusleeping>
we might just have to use makefiles for windows
<whyrusleeping>
and have it replace all symlinks with windows 'mklink' or something
<whyrusleeping>
this is really interesting because even cross compiling for windows wouldnt have caught the issue
<gatesvp>
Symlinks are _very rarely_ used in Windows. The `mklink` command requires that you be an admin to run, the `junction` command is from the sysinternals tool I downloaded that is probably not on half of the Dev machines at my day job.
<gatesvp>
i.e.: it's a power tool among power tools :)
<whyrusleeping>
lol
<whyrusleeping>
greatttt
<whyrusleeping>
i totally used links in windows when i was gaming to trick steam into using my other hard drive
voxelot has joined #ipfs
voxelot has joined #ipfs
Leer10 has joined #ipfs
Tv` has quit [Ping timeout: 255 seconds]
voxelot_ has quit [Ping timeout: 246 seconds]
<whyrusleeping>
gatesvp: what we could do is have a make script for windows (does windows even have make? crap) that replaces the symlink imports with the real folders they reference
<whyrusleeping>
which kinda sucks, but it should work
<spikebike>
windows doesn't have make afaik
<spikebike>
cygwin can add make of course
Tv` has joined #ipfs
<M-davidar>
whyrusleeping, Re pinning ux, I'd also like an option where you could specify a list of hashes, and it would pin a random subset of blocks without many seeders, up to a storage limit specified by the user
<M-davidar>
Would be really helpful for archives
<spikebike>
M-davidar: maybe something like ipfs pin --least-popular=10 or something
<spikebike>
?
<spikebike>
where 10 = 10%
wasabiiii1 has joined #ipfs
<M-davidar>
Preferably something like limit=10GB
<M-davidar>
since 10% might be a huge amount of data
<spikebike>
seems like that would be better in ~/.ipfs/config or similar
wasabiiii has quit [Ping timeout: 272 seconds]
<M-davidar>
Sure, my point being that the user can control it somehow
M-davidar is now known as davidar
<spikebike>
sure, just seems like pinning is an interactive decision
<spikebike>
and often changed
<spikebike>
total disk used not so much
devbug has quit [Ping timeout: 250 seconds]
<davidar>
Sorry, I didn't mean total disk used, I meant how much space the user is willing to donate to hosting part of that specific dataset
<gatesvp>
@whyrusleeping: had an IRL thing to do... so back in Windows-land, `symlink` seems unreliable within the `go build` and `make` doesn't work on Windows at all.
<davidar>
As in, here's a 10TB dataset, but I only want to host 50GB of it
<davidar>
And let a bunch of other people host the rest
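(Editorial note: a sketch of the selection davidar describes, favouring under-seeded blocks up to the donor's byte limit. The per-block sizes and seeder counts are assumed to be known; the API for learning them is not shown:)

```python
import random

def pick_blocks(blocks, limit_bytes, seed=None):
    """blocks: list of (hash, size_bytes, seeders). Return hashes to pin,
    fewest-seeders first, keeping total size <= limit_bytes."""
    rng = random.Random(seed)
    rng.shuffle(blocks)  # break ties between equally-seeded blocks randomly
    chosen, used = [], 0
    for h, size, _ in sorted(blocks, key=lambda b: b[2]):  # least seeded first
        if used + size <= limit_bytes:
            chosen.append(h)
            used += size
    return chosen

blocks = [("QmA", 40, 1), ("QmB", 30, 0), ("QmC", 50, 5)]
print(pick_blocks(blocks, limit_bytes=80, seed=0))  # -> ['QmB', 'QmA']
```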
<gatesvp>
@whyrusleeping: even trying to replace symlink with the windows `mklink` version seems to fail so I'm investigating further options
<davidar>
seems whyrusleeping is sleeping again...
<davidar>
We need a bot that tells us when people's sleeping hours are
<davidar>
Maybe just: timebot, what time is it for davidar?
<gatesvp>
So the good news is that I can manually replace the symlink with the appropriate code and everything builds...
<gatesvp>
so it's really just this one spot
<davidar>
multivac, how would you feel if I scooped out your brain and replaced it with sopel.chat?
<multivac>
I don't know
<davidar>
Good answer
shea256 has joined #ipfs
shea256_ has quit [Read error: Connection reset by peer]
shea256 has quit [Read error: Connection reset by peer]
shea256 has joined #ipfs
voxelot has quit [Ping timeout: 246 seconds]
voxelot has joined #ipfs
devbug has joined #ipfs
Tv` has quit [Quit: Connection closed for inactivity]
wasabiiii has joined #ipfs
twistedline has quit [Ping timeout: 246 seconds]
wasabiiii1 has quit [Ping timeout: 240 seconds]
Quiark has quit [Ping timeout: 256 seconds]
Quiark has joined #ipfs
<borgtu>
lgierth: I already wrote them after it happened, told them what it was and gave them a reference for ipfs.io. I don't think they will fix anything tho' :)
<whyrusleeping>
gatesvp: sorry, was out for a little bit
<whyrusleeping>
davidar: re: pinning least popular, thats an interesting idea. i like it
mildred has joined #ipfs
<whyrusleeping>
although it shouldnt be part of the pin command
<whyrusleeping>
it should just be its own thing, probably even its own tool that just uses the api
<ipfsbot>
[go-ipfs] jbenet pushed 1 new commit to master: http://git.io/vZNqu
<ipfsbot>
go-ipfs/master c6166e5 Juan Benet: Merge pull request #1622 from ipfs/feat/rm-worker...
pfraze has quit [Remote host closed the connection]
Leer10 has quit [Ping timeout: 240 seconds]
JohnClare has joined #ipfs
Leer10 has joined #ipfs
JohnClare has quit [Ping timeout: 252 seconds]
mildred has quit [Ping timeout: 255 seconds]
voxelot has quit [Ping timeout: 265 seconds]
voxelot has joined #ipfs
Vendan has quit [K-Lined]
Vendan has joined #ipfs
mildred has joined #ipfs
wasabiiii1 has joined #ipfs
wasabiiii has quit [Ping timeout: 240 seconds]
wopi has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
mildred has quit [Ping timeout: 264 seconds]
JohnClare has joined #ipfs
<cryptix>
good mooorning
JohnClare has quit [Ping timeout: 250 seconds]
qqueue has quit [Ping timeout: 264 seconds]
<whyrusleeping>
cryptix: g'mornin! i'm just about off to bed :)
<whyrusleeping>
goodnight!
qqueue has joined #ipfs
<davidar>
jbenet, awake?
<cryptix>
whyrusleeping: cya :))
dignifiedquire has joined #ipfs
voxelot has quit [Ping timeout: 240 seconds]
rendar has joined #ipfs
iHedgehog has joined #ipfs
Guest62677 has quit [*.net *.split]
dignifiedquire has quit [*.net *.split]
giodamelio has quit [*.net *.split]
<Vylgryph>
Is there any way to view a list of objects that ipfs has stored locally?
Vylgryph is now known as Vyl
<Animazing>
yeah I wondered that myself yesterday; I kinda expected ipfs ls to do that if you didn't give it any arguments
iHedgehog has left #ipfs [#ipfs]
Qwertie has joined #ipfs
<Qwertie>
Hi o/
wasabiiii1 has quit [Ping timeout: 246 seconds]
samiswellcool has joined #ipfs
wasabiiii has joined #ipfs
<Qwertie>
If you put a page on ipfs is it possible to edit it later or is it stuck there forever?
dignifiedquire has joined #ipfs
giodamelio has joined #ipfs
Guest62677 has joined #ipfs
<cryptix>
Qwertie: since everything is addressed by its content, you need to add the data again and use the new hash to access the updated version
<cryptix>
you can use ipns on top though to keep the link the same and update the hash it points to
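(Editorial note: a sketch of the workflow cryptix describes; the hashes here are placeholders:)

```shell
# add the updated page; the hash changes whenever the content does
ipfs add index.html            # -> added Qm... index.html

# point your node's IPNS name at the new hash
ipfs name publish /ipfs/Qm...

# readers keep using the stable name:
#   /ipns/<your-peer-id>/  resolves to the latest published /ipfs/Qm...
```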
iHedgehog has joined #ipfs
iHedgehog has left #ipfs [#ipfs]
<Qwertie>
Is there any page explaining how ipns works?
dignifiedquire has quit [Quit: dignifiedquire]
<Animazing>
I think the paper addresses it Qwertie
<Qwertie>
I will check that out now then :D
ygrek has joined #ipfs
gatesvp has quit [Ping timeout: 246 seconds]
mildred has joined #ipfs
dignifiedquire has joined #ipfs
twistedline has joined #ipfs
thomasreggi has quit []
JohnClare has joined #ipfs
<davidar>
We really need a thing on ipfs.io saying "this page is hosted on ipfs, and here's how it works"
<davidar>
In fact the whole landing page desperately needs an overhaul
<davidar>
I think that would resolve 90% of people's confusion
<M-amstocker>
I agree ^ The videos are great but I think some people with only a passing interest won't watch
j0hnsmith has joined #ipfs
<davidar>
amstocker, yay, chat.ipfs.io? :)
<M-amstocker>
yeah im on it right now
<davidar>
Thought so
<M-amstocker>
is this your project?
<davidar>
I put it onto ipfs, but most of the credit goes to vector.im and matrix.org
atomotic has joined #ipfs
<Animazing>
Does ipfs cache (popular) random blocks on disk or does it only try to retrieve blocks that somebody specifically requested
<cryptix>
Animazing: nope - not yet
<Qwertie>
What stops people from filling ipfs with TBs of useless data?
<M-amstocker>
David: it's cool!
<M-amstocker>
reading up on matrix now
<Animazing>
cryptix: that probably means that there is no plausible deniablity for files?
<spikebike>
I kinda like that nothing in on my IPFS node except for content I want
<cryptix>
Qwertie: the public gateway garbage collects from time to time
<Animazing>
If the TTIP files leak, will it be a DMCA risk to host them?
<spikebike>
I would however like a cache/proxy/browser plugin that would cache content I browse so that anyone on the planet could get to it even if the original server was down
<davidar>
amstocker, also, if you have any ideas for how to improve ipfs.io for new users, I'd love some help putting something together
amstocker has joined #ipfs
<cryptix>
Animazing: the public GWs have a dmca blacklist iirc but be aware that ipfs is not anonymous
<Qwertie>
spikebike, That would probably reveal a list of everything you have opened
<spikebike>
Qwertie: ya.
<spikebike>
everything not over SSL anyways
JohnClare has quit [Remote host closed the connection]
<Animazing>
cryptix: yeah I see nodes can be resolved to IPs. It's just that if you had a cache of popular blocks locally, you could be hosting files without knowing it; that would make it less of a risk to host them knowingly
<davidar>
Animazing, ipfs isn't really good for hosting anything you wouldn't be willing to host on a normal web server (by design I suspect)
<davidar>
Also, ipfs never hosts anything without your consent
<M-amstocker>
David, I would definitely be willing to help
<cryptix>
spikebike: like the idea too - maybe a instapaper-on-ipfs for starters, i share Qwertie's privacy concerns
<M-amstocker>
not that i'm some ux expert
<Animazing>
yeah looks like it; and I'm ok with that. Just trying to figure out where to put ipfs on the spectrum :)
<Qwertie>
Cant you request a file from someone else even if they dont have it? Or is that just the public nodes?
<davidar>
amstocker, me either, mostly interested in the content rather than making it look pretty
wasabiiii1 has joined #ipfs
<spikebike>
cryptix: ya, I'd want a whitelist/blacklist for what's in the proxy. Nothing with my bank, taxes, insurance, healthcare, etc. But happy to share anything over http from youtube, slashdot, hacker news, weather sites, etc.
Beer has joined #ipfs
<Beer>
Morning
wasabiiii has quit [Ping timeout: 244 seconds]
<cryptix>
spikebike: yup! id also like to throw in youtube-dl so that you dont just cache the youtube page but the video as well
<spikebike>
well I'd hope youtube would just connect to IPFS, seems like it would be good for them as long as the ads are embedded.
<cryptix>
Qwertie: if some node on the swarm has added it, and you know the hash, you can get it
<spikebike>
IPFS could make for a very cool CDN
<cryptix>
spikebike: wont happen :)
<cryptix>
well.. would be nice
<Qwertie>
Just wondering if someone could make your node start hosting copyrighted files even if you never requested them
<cryptix>
Qwertie: no - your node just stores data that you requested
ianopolous2 has quit [Ping timeout: 265 seconds]
<cryptix>
at some point there could be nodes that look for frequently requested blocks but thats not there yet afaik
<spikebike>
if even 1 in 1000 clients were ipfs enabled any popular content would get in directly
<amstocker>
davidar, where should I put some ideas for the ipfs.io landing page? submit an issue on github?
<cryptix>
amstocker: yea gh/ipfs/website
<spikebike>
instead of trying to guess what other clients are asking for
<davidar>
cryptix, I've discussed the CDN idea with jbenet before
chiui has joined #ipfs
<davidar>
er, spikebike
<cryptix>
spikebike: id be cautious with guessing techniques too - a shared proxy would grow a cache much more naturally, i guess. just saying that nothing stops you from building a node that does just that :)
<davidar>
amstocker, yeah, what cryptix said, cc me and I'll add some thoughts too
<spikebike>
populating a CDN by proxy efficiently would require #1) an accurate and unspoofable world view of traffic #2) an accurate and unspoofable world view of how many IPFS nodes have a copy
<davidar>
+1 for proxy-to-ipfs
<spikebike>
and would give you much the same as a stupid/naive proxy/cache
<davidar>
spikebike (IRC): filecoin.io is supposed to solve 2
<davidar>
I've also been thinking about the first one
<spikebike>
yeah, I'm quite dubious of that approach
<spikebike>
when I hear block chain I think "slow as shit"
<davidar>
spikebike (IRC): the data itself isn't stored on the block chain
<davidar>
And bitcoin just happens to have a rather slow block chain
<spikebike>
davidar: anything I can imagine storing in the blockchain would still be hugely expensive
<spikebike>
davidar: ya, but there's like 0.0001% as many bitcoin transactions as cdn operations
<spikebike>
or cache operations
<spikebike>
or bandwidth, whatever.
<spikebike>
not sure why something much more simple can't work just as well
<spikebike>
like say bittorrent's tit-for-tat
<davidar>
That's bitswap
<spikebike>
why can't my IPFS node and its 30-40 peers trade storage/bandwidth/cpu as needed among themselves
<davidar>
It doesn't solve the problem of proving nodes have a copy of data efficiently though
<spikebike>
instead of trying to make a global ledger to try to keep the same metrics fair.
<davidar>
That's what the block chain's for
<davidar>
That's also planned
<davidar>
"IPFS cluster"
<spikebike>
say there's 1M IPFS nodes, with conservatively 1B files, that talk to 10 clients each every day
<spikebike>
what parts do you think would be reasonable to keep in that ledger?
<spikebike>
and how would it be bad if a node did the equivalent of a double spend in bitcoin?
<davidar>
I don't know, ask jbenet ;)
<spikebike>
A single raspberry pi using traditional methods could process the planet's worth of bitcoin transactions
<spikebike>
namecoin seems elegant and attractive... if you ignore performance
<spikebike>
which is a biggy, especially for IPFS which is otherwise quite performant
<davidar>
The point I'm trying to make is that these are all issues that people have been thinking about, so try searching the irc logs or github if you want to know more
<spikebike>
I'm not an expert, but I ahve been reading the docs and watching the irc channel for many months
amstocker has quit [Ping timeout: 246 seconds]
<davidar>
Yes, I agree that block chains aren't ideal, but there needs to be some way for everyone to agree on certain things
<spikebike>
ipfs as a cdn seems reasonable, file sharing seems reasonable, proxy/browser plugin seems reasonable
<Qwertie>
Im getting "ERR_CONNECTION_REFUSED" when connecting to the webui
<Qwertie>
Do I need to start it after running init?
necro666_ is now known as necro666
<spikebike>
davidar: not sure on the namecoin, especially for content addressable storage, if the checksum is right you know you got the right value
<spikebike>
Qwertie: did you start the daemon?
<Qwertie>
Just ran ipfs init
<spikebike>
run ipfs daemon
<spikebike>
give it 30-60 seconds, then try the web ui again
<Qwertie>
Seems to be working
<davidar>
spikebike, I was talking specifically about how to prove a node is storing some data without having to download it all and verify
<davidar>
So that you don't have malicious nodes pretending to
<spikebike>
davidar: my favorite method for that is to ask for a small fraction of it, big random ranges, and ask for a checksum of the range
<davidar>
I think that's what file coin does...
<spikebike>
say some node is hosting 1M files, once a day pick 0.1% and 2 random numbers and ask for the checksum of that range of that object
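(Editorial note: a toy version of the challenge-response spikebike sketches; the nonce keeps the prover from caching answers. Real proof-of-storage schemes, like the ones filecoin aims for, are far more involved:)

```python
import hashlib
import os

def respond(data, offset, length, nonce):
    """Prover: hash the requested byte range together with the verifier's nonce."""
    return hashlib.sha256(data[offset:offset + length] + nonce).hexdigest()

def spot_check(local_copy, ask_remote):
    """Verifier holding its own copy: issue one random-range challenge."""
    offset = int.from_bytes(os.urandom(4), "big") % max(1, len(local_copy) - 64)
    nonce = os.urandom(16)
    expected = respond(local_copy, offset, 64, nonce)
    return ask_remote(offset, 64, nonce) == expected

data = b"x" * 4096
honest = lambda off, ln, n: respond(data, off, ln, n)          # actually stores the data
liar = lambda off, ln, n: respond(b"y" * 4096, off, ln, n)     # stores something else
print(spot_check(data, honest), spot_check(data, liar))  # -> True False
```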
<spikebike>
davidar: so every ipfs add requires adding the name and checksum to the global blockchain?
<spikebike>
only thing I can think of that could be reasonable for namecoin is the equivalent of a domain registration
<davidar>
No, just the checksum proofs you mentioned
<Beer>
anyone have an IPFS hash I can try connecting to?
<davidar>
Beer, as in, something hosted on ipfs?
<Beer>
#ipfs: yep
<Beer>
uh, davidar: yes
<Qwertie>
Is it possible to use ipfs without port forwarding?
<Beer>
ID QmYumn1eZZv3AXPNibQoiBapmb7XS3CXUbrkCSGQNQHfiA up 1342 seconds connected to 0:
<spikebike>
that should just show node info and addresses
<Qwertie>
On the connections tab
<Qwertie>
I did get foo.avi though
<d11e9>
hey all, i know windows is not officially supported but was wondering if anyone has found a way around not being able to run ipfs a second time? I've been having to re-init each time.
danslo has quit [Remote host closed the connection]
JohnClare has joined #ipfs
<reit>
or multimulti and encapsulate everything ;)
herch has joined #ipfs
qqueue has quit [Ping timeout: 252 seconds]
apophis has joined #ipfs
apophis has quit [Remote host closed the connection]
qqueue has joined #ipfs
apophis has joined #ipfs
apophis has quit [Remote host closed the connection]
<herch>
I am interested to know what people are working on with IPFS. Not the actual IPFS development itself, but rather uses of IPFS to replace existing web sites.
apophis has joined #ipfs
apophis has quit [Remote host closed the connection]
apophis has joined #ipfs
apophis has quit [Remote host closed the connection]
JohnClare has quit [Ping timeout: 246 seconds]
apophis has joined #ipfs
apophis has quit [Remote host closed the connection]
Leer10 has quit [Read error: Connection reset by peer]
Beer has joined #ipfs
apophis has joined #ipfs
<VictorBjelkholm>
herch, I'm currently building a twitter clone actually!
tsenior`` has joined #ipfs
<herch>
VictorBjelkholm: Wow! if you are writing it on github I would like to note it down. I am collecting all the interesting stuff that people are doing using IPFS.
<VictorBjelkholm>
I'm not completely sure about what I'm doing, but that's what I'm trying to build. It's kind of shitty though since I basically hit random peers to see if they are using my twitter clone, and also since I need to poll all the people you "follow" for updated messages. But it's ticking along :)
ygrek has joined #ipfs
<VictorBjelkholm>
herch, right now, no. I'll write here once I have something to share
apophis has quit [Read error: Connection reset by peer]
<herch>
VictorBjelkholm: Sure, no problem.
apophis has joined #ipfs
qqueue has quit [Ping timeout: 246 seconds]
qqueue has joined #ipfs
<fazo>
VictorBjelkholm: you built a twitter clone using ipfs? :O
<solarisfire>
Yeah, is that not a little too dynamic for ipfs?
<VictorBjelkholm>
fazo, building! But yeah, I am!
pfraze has quit [Remote host closed the connection]
<VictorBjelkholm>
solarisfire, nah, I think it's gonna work out in the end. But, it's merely a fun experiment, to see if it's possible :)
<fazo>
VictorBjelkholm: I think I have an idea on how to make it work
<VictorBjelkholm>
version 0.1 went fine, had some syncing issues so doing a refactor now, putting pieces where they are supposed to be, so I can open source it
<VictorBjelkholm>
fazo, if you know a way that doesn't involve polling, PLEASE tell me!
pfraze has joined #ipfs
<fazo>
VictorBjelkholm: what if you save to localstorage the ipfs address of a json file with the ipfs id of your followers/followed and an array of refs of all the tweets
<fazo>
your tweets, not the others
<fazo>
then you make a IPNS name associated to an object
<VictorBjelkholm>
fazo, I'm kind of doing something like that, but in FS instead
<VictorBjelkholm>
and in a backend server that runs in a container
<fazo>
which is like a db for the user
<fazo>
yeah that would work of course
<fazo>
but I think there's a way to make it not rely on external services
<fazo>
each user is identified by his ipfs id
<fazo>
the public key
<fazo>
then, you get the hash associated to the "twitter-app-db" name of that user
<fazo>
which contains his profile and his tweets
<fazo>
I think it makes sense
apophis has quit [Quit: This computer has gone to sleep]
<fazo>
the list of followers and followed you save it to the browser's local storage and create some way to export it in the app
<fazo>
this way you can instantly see new tweets in the app
<fazo>
and it only relies on IPFS and IPNS
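(Editorial note: the scheme fazo describes boils down to one content-addressed profile object per user, republished under their IPNS name whenever it changes. A toy version, with invented field names and placeholder hashes:)

```python
import hashlib
import json

def publish(profile):
    """Stand-in for `ipfs add` + `ipfs name publish`: hash the profile object."""
    return hashlib.sha256(json.dumps(profile, sort_keys=True).encode()).hexdigest()

profile = {
    "peer_id": "QmYourPeerId",                    # the user's identity (placeholder)
    "following": ["QmAlicePeer", "QmBobPeer"],    # resolved via their IPNS names
    "tweets": ["QmTweet1", "QmTweet2"],           # refs to immutable tweet objects
}

before = publish(profile)
profile["tweets"].append("QmTweet3")
after = publish(profile)
assert before != after  # each new tweet yields a new hash for IPNS to point at
```

Followers then poll each followed peer's IPNS name for a new profile hash, which is exactly the polling cost VictorBjelkholm mentions.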
qqueue has quit [Ping timeout: 246 seconds]
qqueue has joined #ipfs
<solarisfire>
Currently running an experiment for another possible example... hope it works :-)
<VictorBjelkholm>
fazo, don't worry, I've already figured that part out :) The only thing I'm waiting for now is to have aggregation and/or pub/sub
<VictorBjelkholm>
fazo, you exactly described how I do it, but I'm doing it in a backend, not local storage
<solarisfire>
Wonder if ipdb could be on the cards, or is that too much? XD
<fazo>
VictorBjelkholm: cool :)
<fazo>
VictorBjelkholm: do you have a github repo?
<VictorBjelkholm>
fazo, as I said before, not completely ready to publish the code, I'll write here once it's done, should be during the week or weekend
<solarisfire>
hugo
<fazo>
VictorBjelkholm: too bad! Really wanted to have a look
<fazo>
VictorBjelkholm: may I ask how you implemented the backend?
shea256 has quit [Remote host closed the connection]
shea256 has joined #ipfs
<davidar>
if you guys have any more ideas about how to implement dynamic sites on ipfs (especially dealing with user-submitted content), please add them to that issue
<VictorBjelkholm>
fazo, give me your username and I'll invite you to the prototype
<lgierth>
fazo: you can host everything entirely in ipfs
<lgierth>
ipfs.io for example is a static page hosted out of ipfs
<VictorBjelkholm>
fazo, if you tried it and you couldn't get it to work, care to open up an issue in the repo?
<fazo>
VictorBjelkholm: I didn't try it, I was just under the impression that the js implementation doesn't work
<fazo>
VictorBjelkholm: I'm happy I was wrong then!
<fazo>
VictorBjelkholm: anyway, later I'm going to build a file manager that lets you save bookmarks in the browser local storage
<fazo>
VictorBjelkholm: so you can bookmark data you see and like
<VictorBjelkholm>
fazo, ooh, that's a good idea! Love to see it
<fazo>
VictorBjelkholm: yeah, another thing I wanted to add is the ability to choose which ipfs hosted app to use to open media files
<fazo>
VictorBjelkholm: for example if a file is a video it could open it with that video player hosted on ipfs
<fazo>
VictorBjelkholm: the problem is figuring out what the file actually is, I don't think there's an easy way to do that
<herch>
Is clone of Pinterest possible?
<VictorBjelkholm>
herch, why not?
<VictorBjelkholm>
fazo, there are tools to try to figure out the filetype from the data itself. I know the standard library in golang has something like that; probably many solutions exist for it
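The Go standard library facility alluded to here is `net/http.DetectContentType`, which sniffs a MIME type from the first 512 bytes of data. A minimal sketch (the `SniffType` wrapper is a name made up for this example):

```go
package main

import (
	"fmt"
	"net/http"
)

// SniffType guesses a MIME type from the leading bytes of a file's
// content, without needing a file extension.
func SniffType(data []byte) string {
	// DetectContentType considers at most the first 512 bytes.
	if len(data) > 512 {
		data = data[:512]
	}
	return http.DetectContentType(data)
}

func main() {
	pdf := []byte("%PDF-1.4 rest of document")
	png := []byte("\x89PNG\r\n\x1a\n rest of image")
	fmt.Println(SniffType(pdf)) // application/pdf
	fmt.Println(SniffType(png)) // image/png
}
```

This covers common formats via magic-byte signatures; it won't distinguish everything, but it's enough for "open videos with the video player" style dispatch.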
<samiswellcool>
I'm kicking around a little cli app for storing the files you publish, generating a html page detailing all of them and their versions and then setting your ipns to that page. I plan to use it to keep track of front end projects and their ipns previous versions and links
<herch>
No i am just trying to figure out what's possible and what's not. e.g. I haven't yet figured out how would you implement a nosql db on IPFS and run queries on it
<VictorBjelkholm>
and also there was some talk about getting the IPFS.io gateway to get better at it, possibly write a new one
<fazo>
samiswellcool: that's awesome
<herch>
something that competes with Amazon dynamodb
<VictorBjelkholm>
herch, use IPNS and something like Sqlite and it's done :)
<fazo>
herch: like VictorBjelkholm said, you update the ipns name to the hash of the new db every time the db changes. Then it's always available via ipfs at the same name
besenwesen has quit [Ping timeout: 255 seconds]
besenwesen_ has joined #ipfs
<herch>
VictorBjelkholm, fazo : ok
<fazo>
but what happens when you $.ajax a directory?
<herch>
but it should be simpler, like someone should be just able to createdb = new db() and it should work
<fazo>
herch: libraries can be built to do something like that, we're still in the early days
<herch>
fazo : I know :)
<herch>
fazo: just throwing things out here, thinking out loud
wasabiiii1 has joined #ipfs
pfraze has quit [Remote host closed the connection]
wasabiiii has quit [Ping timeout: 252 seconds]
vijayee_ has joined #ipfs
<herch>
we definitely need good apps on IPFS, but we should focus more on libs. This infra will allow many people to build apps that we cannot even imagine.
shea256 has quit [Remote host closed the connection]
<screensaver>
SqLite? I believe NeDB even runs in a browser
<achin>
om nom nom. the ipfs daemon is currently using about 13G memory (as i am adding about 300MB of data)
<blame>
You would want a database method that knows about the merkledag, so you can minimize the costs of updates. But honestly, a flatfile with ipns pointer does the trick.
apophis has quit [Ping timeout: 244 seconds]
apophis has joined #ipfs
<VictorBjelkholm>
herch, I'm not sure I agree. The need for libs comes from the need of apps. Without building apps, you don't know what the lib needs
sseagull has joined #ipfs
apophis is now known as cdslayer
Tv` has joined #ipfs
<Bat`O>
VictorBjelkholm: it's never too early to open-source something
<VictorBjelkholm>
well, the thing has to be working properly before :)
shea256 has joined #ipfs
screensaver has quit [Ping timeout: 246 seconds]
voxelot has joined #ipfs
mildred has quit [Ping timeout: 250 seconds]
shea256_ has joined #ipfs
shea256 has quit [Read error: Connection reset by peer]
shea256_ has quit [Read error: Connection reset by peer]
shea256 has joined #ipfs
besenwesen_ is now known as besenwesen
besenwesen has quit [Changing host]
besenwesen has joined #ipfs
devbug has joined #ipfs
nessence has joined #ipfs
screensaver has joined #ipfs
wasabiiii has joined #ipfs
<screensaver>
Q: are there plans for a Decentralized GitHub? In May I read this piece http://blog.printf.net/ but I'm not sure if this is a duplication of IPFS?
wasabiiii1 has quit [Ping timeout: 260 seconds]
devbug has quit [Ping timeout: 250 seconds]
<fazo>
screensaver: a decentralized github is relatively easy to build on ipfs, provided that ipns works
<Bat`O>
VictorBjelkholm: that's too much teasing :p
qqueue has quit [Ping timeout: 240 seconds]
<VictorBjelkholm>
Bat`O, hah, there is never too much teasing! No, but I'll open it up tonight, once I stop work on day-stuff
cdslayer has quit [Quit: This computer has gone to sleep]
<achin>
jbenet: virt is still at 13g (res is at about 5g)
qqueue has joined #ipfs
<jbenet>
that's terrible.
<achin>
oh man. the stack trace is huge, and blew past the 1800 line tmux buffer i have. will only the last 1800 lines be useful at all?
<voxelot>
how do I add a CORS exception to the api on my gateway? locally this works export API_ORIGIN="http://localhost:8080"
<voxelot>
my gateway is on /ip4/158.69.62.218/tcp/80 but export API_ORIGIN="http://158.69.62.218:80" is 403
<voxelot>
not sure if this is even supposed to work; running a twitter wall made in angular, all client side with a node api, hashed, then run on a gateway to serve the api requests
<VictorBjelkholm>
voxelot API_ORIGIN should be the domain you're making the requests from, not the destination
<voxelot>
crap
<voxelot>
the request is coming from random browsers
<VictorBjelkholm>
also, useful while developing might be to set API_ORIGIN to *
<VictorBjelkholm>
voxelot, but the browsers accessing an address, no?
<VictorBjelkholm>
I mean, if you have your frontend on my_frontend.com and your daemon on whatever, you'll need to set API_ORIGIN to my_frontend.com
<VictorBjelkholm>
lgierth, if you're allowing localhost and such, why not just set * ???
<VictorBjelkholm>
yeah, I think that's easier, especially when you want anyone, anywhere to be able to access it
<voxelot>
thanks guys, let me try it
<lgierth>
VictorBjelkholm: because * includes everything, and we're not 100% sure yet that's the best idea
<VictorBjelkholm>
voxelot, the export command above should be `export API_ORIGIN='*'` FYI
<voxelot>
yup got it haha ty
<VictorBjelkholm>
lgierth, well, you already have localhost which basically can be anything. So my feeling is that you don't really want to limit it, hence * makes sense
<VictorBjelkholm>
sucks to have the nickname So btw
<lgierth>
no, localhost is localhost
<lgierth>
not sure what you mean with localhost being potentially anything
<lgierth>
yeah as i said we'll probably allow * eventually
<lgierth>
or soon
<lgierth>
who knows :)
<VictorBjelkholm>
well, I was thinking slightly wrong but my thinking was that if I wanted to access `API` directly from the browser, from Trello, I could just proxy Trello through localhost
<VictorBjelkholm>
but anyways, slightly wrong thinking so ignore me politely...
shea256 has quit [Read error: Connection reset by peer]
shea256 has joined #ipfs
<fazo>
VictorBjelkholm: I just got the barebones interface to add and remove bookmarks to work. It also saves them to localstorage
<fazo>
jbenet: and now, they're on the permanent web.
<jbenet>
Boom.
<achin>
so in total, those RFCs on disk take up about 300MB, but if you sum up the link sizes of that root block, it's about 500MB
<jbenet>
yay dedup
<jbenet>
interesting... i would've expected text to not dedup that much
<jbenet>
maybe the pdfs?
<lgierth>
or MUST and SHOULD
<lgierth>
:)
<achin>
someone pinned a previous version of that collection which has the .txt, but not the PDFs
<ion>
I ^C’d an “ipfs pin add -r QmeZnaoktRvs3Ek8pQNyj8BbrkKuJ5Ez6JHi32dazL7cEJ” (due to bandwidth issues) and even restarted the daemon, but the daemon still keeps downloading the blocks. What to do?
<achin>
so when you pinned this new one, it should have only needed to download the .pdf and .ps files
<lgierth>
ion: ipfs pin rm
<whyrusleeping>
ion: if you restarted the daemon, it shouldnt be downloading more blocks
<fazo>
ion: that's why you dont pin data from strangers lol
<whyrusleeping>
you can see what blocks your daemon is trying to fetch by running 'ipfs bitswap wantlist'
<lgierth>
whyrusleeping: not even for pins?!
<achin>
(also, /me is eminence on github)
<jbenet>
achin: ah nice! do you have a twitter handle?
<jbenet>
i just tweeted out about it and want to thank you
<whyrusleeping>
lgierth: nope, a block wont be pinned until it is fully fetched. so if you kill a daemon in the middle of the process, you wont have a pin
<lgierth>
oh ok
pinbot has quit [Ping timeout: 244 seconds]
<achin>
i do, but since i never use it, it's as if i don't exist on twitter
<jbenet>
ion: ^C attempts graceful teardown. do it again for full shutdown
<jbenet>
ok
<fazo>
if I download some file using go-ipfs
<fazo>
does it use multiple sources to speed up the download yet?
<fazo>
or is that not yet implemented
apophis has joined #ipfs
<drathir>
fazo: i'm guessing it uses only the closest location atm...
<ion>
The wantlist is empty. When i run ipfs repo gc repeatedly it finds blocks to remove every time. When i look at the blocks being added in ls -lrt ~/.ipfs/blocks/*/* they are all RFC stuff. I have verified that no ipfs processes are running and restarted the daemon.
<achin>
RIP pinbot
<jbenet>
hahaha yeah
<jbenet>
lgierth: pinbot died. should i reboot it?
<VictorBjelkholm>
solarisfire, regarding hugo, add -r the directory and use that instead and it will probably work
qqueue has joined #ipfs
<VictorBjelkholm>
"Path Resolve error: no link named "about.html" under QmeQT76CZLBhSxsLfSMyQjKn74UZski4iQSq9bW7YZy6x8" would be fixed by that I think, for example
<solarisfire>
Problem is you have to know the directory before you run hugo...
<solarisfire>
But you add the files after...
<achin>
i am starting to see other providers for some of these .pdf files, so it looks like pinbot did his job
<solarisfire>
how do you check that?
<jbenet>
solarisfire: i ran into this; hugo then had no way of doing proper relative links. not sure if it's fixed, i posted an issue
<solarisfire>
Or I could rewrite their themes to be ipfs friendly...
<whyrusleeping>
jbenet: uh oh. repro?
<jbenet>
that would be a better solution. but prob easier to start with the script (80% solution), then move to that
<drathir>
jbenet: any info about the default ports collision with iperf? ideas on which one to change?
<jbenet>
whyrusleeping just calling ipfs refs -r QmeZnaoktRvs3Ek8pQNyj8BbrkKuJ5Ez6JHi32dazL7cEJ on a machine with 512MB ram
<jbenet>
drathir yeah we'll change them, not sure yet.
<solarisfire>
if I do ipfs refs -r on a big directory it just kills my box...
<whyrusleeping>
jbenet: its because the networks grown
<jbenet>
we thought of 4737 for swarm protocol
<jbenet>
(ipfs in dial pad)
<solarisfire>
I see a ton of iowait, my queues grow to insane sizes, and my console freezes...
<jbenet>
whyrusleeping :/ udp time.
wopi has quit [Read error: Connection reset by peer]
<whyrusleeping>
yeah.
<drathir>
jbenet: let me know which port to change, i will try to make a PR...
<whyrusleeping>
tcp was fun...
wopi has joined #ipfs
qqueue has quit [Ping timeout: 240 seconds]
<jbenet>
drathir: need to settle on all ports first, i'll post on go-ipfs
wasabiiii has quit [Ping timeout: 264 seconds]
wasabiiii1 has joined #ipfs
qqueue has joined #ipfs
jeffreyhulten has joined #ipfs
<ion>
When initializing an ipfs repo, why isn’t just /ipfs/QmVtU7ths96fMgZ8YSZAbKghyieq7AjxNdcqyVzxTt3qVe pinned recursively, rather than all the files under it individually?
<drathir>
jbenet: ok thanks a lot, and sorry for adding an extra task...
wasabiiii1 has quit [Ping timeout: 252 seconds]
apophis has quit [Ping timeout: 255 seconds]
<jbenet>
drathir: no, thank you! this needs to get done
apophis has joined #ipfs
<jbenet>
ion: i think it's because of the code. it should be just pinned recursively
wasabiiii has joined #ipfs
<samiswellcool>
Think after I've finished the first bit of my ipfs project tracker, I'll be putting together a list of ideas in case I never get around to making any of them. I'm trying to write stuff down as I think of it
<solarisfire>
giodamelio: and baseURL is ignored by some themes so depends which one you use...
<giodamelio>
solarisfire: relativeUrls is a config option (only in the newest version I think), as for the baseUrl thing, I guess you will have to try theme by theme. I am using hyde-x and it works fine.
<solarisfire>
giodamelio: yeah hyde is one of the ones it works in... I used "go get" so I'm pretty sure I'm on the latest version??
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<solarisfire>
solarisfire@ipfs:~/hugo$ hugo version
<solarisfire>
Hugo Static Site Generator v0.15-DEV BuildDate: 2015-09-16T15:06:45+01:00
shea256 has quit [Remote host closed the connection]
<giodamelio>
Ya, then as long as the template is good, it should work.
tsenior`` has joined #ipfs
j0hnsmith has quit [Quit: j0hnsmith]
dignifiedquire has quit [Quit: dignifiedquire]
jhulten has quit [Quit: WeeChat 1.3]
<fazo>
It works
<fazo>
I wrote an ipfs bookmark manager
<fazo>
UX horrible atm though
<whyrusleeping>
fazo: ipfs bookmark manager??
<fazo>
if anyone wants to try it: QmSyURHE4qBKWCNYoX51B6VRMMfFS4HBhbjKY6vSnJSByN
* whyrusleeping
has wanted that for a while...
<fazo>
yes it's a small web app
<fazo>
very simple actually
<fazo>
hacked together in 1 hour with questionable skills forgotten during the last months
<fazo>
but it works
<whyrusleeping>
ah neat
<fazo>
I'm hosting it on a shitty connection though, it will take a while to load
<whyrusleeping>
already got it, no worries
<whyrusleeping>
(the network appears to be a little less congested this morning)
qqueue has quit [Ping timeout: 264 seconds]
<fazo>
damn that was fast, it took 50 seconds to upload it to my server
<giodamelio>
It only took a second for me
<fazo>
it means the network is working very well
<fazo>
or you are all using the default gateway which has it cached
<fazo>
by the way you can see the github page if you click on the title
herch has quit [Ping timeout: 246 seconds]
<Vyl>
Never got an answer to my question... is there any way to view a list of what's stored on my node?
<fazo>
Vyl: yes, ipfs ls
<fazo>
no wait
<fazo>
that's wrong
<Vyl>
Tried that.
<Vyl>
I'm giving some thought to indexing ideas.
<whyrusleeping>
Vyl: 'ipfs refs local'
<fazo>
Vyl: if you want to see pinned stuff (stuff that wont be deleted when you run ipfs gc)
<fazo>
Vyl: just run ipfs pin ls --type all
atrapado has joined #ipfs
jhulten_ has joined #ipfs
<Vyl>
ipfs refs local... thanks.
jhulten has joined #ipfs
jhulten_ has quit [Client Quit]
<whyrusleeping>
Vyl: yeah, that shows every block that is on your disk (except for a few that dont show up due to a bug :/ )
<Vyl>
I'm going to write a quick perl script that will try to make sense of it.
wopi has quit [Read error: Connection reset by peer]
<Vyl>
Help tell you a bit about what you have.
<jbenet>
Vyl: awesome! that would be very useful.
wopi has joined #ipfs
<jbenet>
there's a "find roots" script that whyrusleeping wrote-- let me find it for you
shea256 has joined #ipfs
<Vyl>
That might save me some time.
<whyrusleeping>
jbenet: maybe ipfs refs local should take a --roots arg that essentially runs that script?
<jbenet>
whyrusleeping +1
<jbenet>
whyrusleeping i'm looking for it in your gists-- is it public?
<whyrusleeping>
although, if you make a node that links to everything else, then only that one will show up, lol
<whyrusleeping>
uhm... maybe?
<whyrusleeping>
one sec...
<achin>
whyrusleeping: jbenet: i wonder if it would be possible to have keybase.io record/prove your ipfs peerID
<whyrusleeping>
achin: i have wanted that ever since i started using keybase
<jbenet>
achin yes, but please dont get too tied to ipfs ids yet, we may have to change the storage format of keys which would change the output hash
<jbenet>
whyrusleeping: ah thanks
<jbenet>
Vyl o/ o/ o/ --- what it's doing is looking for refs that aren't linked to by anything else, hence "find roots
<whyrusleeping>
i need to update that code, it wont build
<whyrusleeping>
gimme a sec
<jbenet>
whyrusleeping also change the roots[u.Key(lnk.Hash)] = false to a delete(roots, u.Key(lnk.Hash))
<jbenet>
else you're going to OOm
M-atrap has joined #ipfs
<achin>
you could sign a message using your peer private key and post it at /ipns/peerid/proof.txt which keybase could then verify
<achin>
er, not your peer private key, but your keybase private key
shea256 has quit [Ping timeout: 256 seconds]
<whyrusleeping>
there, updated the gist
thomasreggi has quit [Read error: Connection reset by peer]
<jbenet>
achin: nice!
nausea is now known as neofreak
<jbenet>
whyrusleeping: delete?
thomasreggi has joined #ipfs
<whyrusleeping>
changing now...
<jbenet>
oh i see, wont work
<whyrusleeping>
?
<whyrusleeping>
why not?
<jbenet>
because it may be added back later
<whyrusleeping>
ah, yeah.
<whyrusleeping>
i thought there was a reason i did it that way
<fazo>
is running multiple nodes with the same keypair supported?
<whyrusleeping>
(good thing i checked irc again before pushing that commit)
<whyrusleeping>
fazo: no
<fazo>
whyrusleeping: yeah, I figured. Just wanted to be sure :)
<ion>
I initialized a fresh ipfs repo, ran the daemon, downloaded /ipfs/QmeZnaoktRvs3Ek8pQNyj8BbrkKuJ5Ez6JHi32dazL7cEJ/rfc11.pdf and ran watch ipfs bitswap stat. After the download had finished (and the wantlist was empty), “blocks received” and “dup blocks received” kept growing for quite a while, meanwhile iftop showed the daemon was downloading from *.i.ipfs.io at 100 to 500 kB/s. It stopped around 120
<ion>
duplicate blocks the first time. I ran a GC and downloaded the same file again. The same thing happened and it’s downloading duplicate blocks from 162.243.248.213 and 104.236.32.22, 400 dup blocks and counting.
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping>
ion: yeah.... bitswap is really dumb right now
<Vyl>
Hmm... going to have to reimpliment that script, I do not know the language. But I get the gist of it.
<whyrusleeping>
we made it dumb so that file transfers would work while we built out the rest of the system
<rendar>
what is bitswap used for?
apophis has quit [Quit: This computer has gone to sleep]
Score_Under has left #ipfs [#ipfs]
wasabiiii has quit [Ping timeout: 260 seconds]
jfis has quit [Ping timeout: 264 seconds]
wasabiiii has joined #ipfs
<achin>
for moving bits around between nodes
<whyrusleeping>
^
<whyrusleeping>
lol
<achin>
(see the paper for more details)
<whyrusleeping>
bitswap is the agent in ipfs that decides who to request blocks from, and who to send blocks to
<whyrusleeping>
right now its answers to that are 'everyone' and 'everyone'
<achin>
bits for everyone \o/
<whyrusleeping>
yay!
<whyrusleeping>
jbenet: it seems like my endless daemon shutdown fixes have regressed again
<whyrusleeping>
its not as snappy to shut down as it once was
<ion>
So this explains why i’ve been managing to transfer data at tens of kB/s while the daemon has kept my ADSL connection saturated at hundreds of kB/s. So it’s just a matter of time until a better bitswap implementation exists and these problems are no more?
apophis has joined #ipfs
<whyrusleeping>
yeap!
ianopolous has joined #ipfs
<whyrusleeping>
i thought we would get to doing it by the time it became a problem, but things happened faster than expected
<ion>
Alright, thanks for the explanation.
<ion>
And thanks for the great work so far.
<whyrusleeping>
ion: you are very welcome :)
shea256 has joined #ipfs
jfis has joined #ipfs
<achin>
if you get ion interested enough, he might even write you an ipfs implementation in haskell!
<ion>
Sure, that seems like easy one-afternoon project, right?
<achin>
sure :)
<whyrusleeping>
jbenet: i dont know if you read backlog, but we broke builds on windows (i broke builds on windows)
<whyrusleeping>
windows cant handle symlinks apparently
shea256 has quit [Ping timeout: 256 seconds]
tsenior`` has quit [Ping timeout: 260 seconds]
<Vyl>
Is there a way to find out if an address is a directory?
<Vyl>
An easy way, I mean.
<achin>
ipfs ls <hash> should do it, i think
<achin>
if you see stuff, it's a directory. else, it's not
<achin>
i suppose this isn't perfect, because the hash could have named links, but not be a directory. but i don't think there is anything that would create those nodes in today's ipfs?
<jbenet>
whyrusleeping great... i havent read backlog because i havent caught up with everything else.
<jbenet>
Vyl: ipfs object get <object> --enc=json shows you the object.
<jbenet>
warning: format will change a bit
<whyrusleeping>
jbenet: yeah... was discussing it with gatesvp last night, i'm hoping he can help us come up with a solution. otherwise i have to revive my windows machine
<Vyl>
QmU2DpieMuxxU8wSv98ZqY3ao7rkbdUcYEfZierDftcPY8 if anyone wants to test it.
<Vyl>
It's an ugly load of perl only, but it'll show you what you have stored, and dump it to files.
bedeho has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
qqueue has quit [Ping timeout: 255 seconds]
qqueue has joined #ipfs
<Vyl>
Mostly for satisfying curiosity.
<Vyl>
Basically just takes 'ipfs refs local', strips out all the directories, and strips out anything that's referenced by another block. So what's left is just the root blocks for files.
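Vyl's approach can be sketched in Go. This toy `FindRoots` assumes you've already resolved each local block's outgoing links (e.g. from `ipfs refs local` plus per-block link listings; the map shape here is an assumption, not an ipfs API). Collecting the full linked set before filtering also sidesteps the re-added-later problem jbenet and whyrusleeping ran into with delete vs. mark-false:

```go
package main

import "fmt"

// FindRoots returns the hashes that no other local block links to.
// refs maps each local block hash to the hashes it links to.
func FindRoots(refs map[string][]string) []string {
	// First pass: record every hash that appears as a link target.
	linked := make(map[string]bool)
	for _, links := range refs {
		for _, l := range links {
			linked[l] = true
		}
	}
	// Second pass: anything never linked to is a root.
	var roots []string
	for h := range refs {
		if !linked[h] {
			roots = append(roots, h)
		}
	}
	return roots
}

func main() {
	// Toy dag: a directory linking two blocks, plus one loose block.
	refs := map[string][]string{
		"QmDir": {"QmA", "QmB"},
		"QmA":   nil,
		"QmB":   nil,
		"QmC":   nil,
	}
	fmt.Println(FindRoots(refs)) // QmDir and QmC (order not guaranteed)
}
```

Because the linked set is complete before any root is emitted, it doesn't matter in which order blocks are visited, and memory stays proportional to the number of hashes rather than growing without bound.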
<whyrusleeping>
Pyrotherm: tour.golang.org is really good. but dont worry about the details too much
<edrex>
(at the end I think you make a concurrent webserver
<pjz>
it would be cool if you could use camlistore as a blockstore
rendar has quit [Ping timeout: 255 seconds]
<Pyrotherm>
I've been trying to build a digital media library system for years in fits and starts, but there's always some better tech on the horizon, but THIS... well, ipfs is game-changing
<edrex>
pjz: agreed. For now I've shelved Camlistore, but I think the metadata model is really well designed and interesting.
ianopolous2 has quit [Ping timeout: 260 seconds]
<edrex>
I used it to store my document archive for 3 years
<pjz>
Pyrotherm: I wish ipfs didn't require pulling everything into its own blockstore, though, esp. since it's not amazingly stable.
<Pyrotherm>
my first concern is blockchain size...
voxelot has quit [Ping timeout: 272 seconds]
<edrex>
but avoided using any of the metadata stuff because I knew it wasn't portable
<pjz>
Pyrotherm: there's no blockchain
<Pyrotherm>
no?
<pjz>
Pyrotherm: there's a content-based merkle-dag, but it's rooted at every node and only refers to that node's contents
qqueue has quit [Ping timeout: 240 seconds]
<Pyrotherm>
each node being a machine on the network?
<edrex>
pjz: you could definitely use camlistore as a blockstore
wasabiiii has joined #ipfs
<edrex>
the trouble is that they have very similar but incompatible data structures
<Pyrotherm>
pjz: what do you mean by "blockstore"?
<edrex>
ie the hash format, sha1-xxxxxx vs 11xxxxx
vijayee_ has joined #ipfs
<edrex>
Pyrotherm: the low-level format treats everything as a content-addressed "block" (ipfs and camlistore both)
rendar has joined #ipfs
<edrex>
a block store is a very simple interface: PUT some data, get back a content address
<Pyrotherm>
so just a KV store then
<edrex>
GET a content address, retrieve the original data
<edrex>
the difference is that the store computes the keys for you
<edrex>
(content addressable)
<Pyrotherm>
right, the central db guaranteed global ID uniqueness
wopi has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
ianopolous has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<edrex>
well, because the keys are computed, rather than assigned, you don't need a central authority to "guarantee" (hash collisions are possible) global uniqueness
<edrex>
so you can put the same object in the store on jupiter and inside a black hole, and get the same key
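The globally consistent key generation edrex describes is just hashing the content; nothing machine-specific enters the computation. A sketch (real IPFS keys are multihashes with a function prefix, base58-encoded, and large files are chunked first, so this is illustrative only):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// ContentKey derives a block's key from its bytes alone. No seed or
// node identity is involved, so the same data yields the same key on
// any machine, anywhere.
func ContentKey(data []byte) string {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:])
}

func main() {
	onJupiter := ContentKey([]byte("hello ipfs"))
	inBlackHole := ContentKey([]byte("hello ipfs"))
	fmt.Println(onJupiter == inBlackHole) // true
}
```

This is what makes global deduplication possible: two nodes storing the same block independently compute the same key without coordinating.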
<Pyrotherm>
so, there's a possibility my data will collide with other data?
<whyrusleeping>
Pyrotherm: yeah, a 1/(2^256) chance
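For what it's worth, 1/(2^256) is the chance that one fixed pair of blocks collides; across N blocks the birthday bound is the relevant estimate, and it is still astronomically small:

```latex
P_{\text{collision}} \;\approx\; \binom{N}{2} \cdot 2^{-256}
\;\approx\; \frac{N^2}{2^{257}},
\qquad\text{e.g. } N = 2^{64} \;\Rightarrow\; P \approx 2^{-129}.
```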
<edrex>
oops, change "object" to "block" - objects refer to higher level entities in IPFS (and also in Camlistore)
<Pyrotherm>
I assumed that it was a tiny chance, but what happens in this case?
<Pyrotherm>
whoops your file has been trumped by another?
<achin>
you could probably be semi-famous for finding a SHA256 collision
<whyrusleeping>
Pyrotherm: not sure.
<whyrusleeping>
if you can find a collision, please let me know
<edrex>
Pyrotherm: things break? depends on the implementation
<rschulman>
lol @achin
<edrex>
if you are using a weak hash function, generating collisions is relatively easy
<whyrusleeping>
if there was a collision, you could get the wrong data when requesting a hash
<rschulman>
edrex: IPFS tries not to use those functinos.
<Pyrotherm>
so basically, you could end up where the routing system has a closer node with a different set of data with the same address
<edrex>
but generally people coding for CAS interfaces treat that as an impossibility (which is ok, as long as you are using good hash functions)
<Pyrotherm>
so, the hash function uses seeds that are unique to my machine to some extent, which makes the probability of a collision that much less?
<edrex>
(that's just cute)
shea256_ has joined #ipfs
shea256_ has quit [Remote host closed the connection]
<achin>
getting a few bits in a hash to match is a far cry from a full collision
shea256_ has joined #ipfs
<whyrusleeping>
achin: no, there has never been one for sha256
<edrex>
Pyrotherm: no, the whole point is the key generation is globally consistent
<edrex>
this allows global deduplication
<whyrusleeping>
the only reason we are slightly worried about sha256, is in the case someone proves it is not a perfect hash
<whyrusleeping>
which *is* much more probable than finding a genuine collision
<Pyrotherm>
ok, so if you have two different sets of data with the same address and you try to replicate that data across the network...
<edrex>
(coupled with content based chunking, which breaks large blocks into small blocks in a deterministic way)
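Content-based chunking can be sketched with a toy rolling sum: a boundary is declared wherever the sum over a sliding window hits a mask, so boundaries depend only on nearby bytes and an edit early in a file leaves later chunks (and their hashes) unchanged. The window and mask values below are arbitrary toy choices, not what go-ipfs actually uses:

```go
package main

import "fmt"

// Chunk splits data at positions determined by the content itself.
// A boundary is placed when the rolling sum over the last `window`
// bytes has its low bits zero, giving boundaries roughly every
// mask+1 bytes on average.
func Chunk(data []byte) [][]byte {
	const window = 16
	const mask = 0x3F // ~64-byte average chunk in this toy setup
	var chunks [][]byte
	start := 0
	sum := 0
	for i, b := range data {
		sum += int(b)
		if i >= window {
			sum -= int(data[i-window]) // slide the window forward
		}
		if i-start >= window && sum&mask == 0 {
			chunks = append(chunks, data[start:i+1])
			start = i + 1
		}
	}
	if start < len(data) {
		chunks = append(chunks, data[start:]) // trailing remainder
	}
	return chunks
}

func main() {
	data := []byte("some example bytes to be chunked deterministically by content")
	fmt.Println(len(Chunk(data)))
}
```

Because the split points are a pure function of the bytes, two files sharing a long run of identical content produce many identical chunks, which then dedup under content addressing.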
<achin>
is a possible attack on SHA256 likely to impact SHA512 as well? i don't know how similar they are in construction
shea256 has quit [Ping timeout: 240 seconds]
<edrex>
Pyrotherm: they won't have the same address, unless god exists and likes to fuck with cryptographers
<Pyrotherm>
am I just worried about something that will never happen? do hard drives or any other kind of supposedly-reliable storage medium use a hash function like this?
<edrex>
Pyrotherm: yes, content addressable storage is a robust and widely-deployed technology in the storage industry
<Pyrotherm>
or maybe your CPU becomes quantum-entangled with mine and produces same hash? /jk
<Pyrotherm>
ok, so in more of a deduplicated-SAN type system that uses block storage...
<Pyrotherm>
I am starting to see how this fits
<M-hash>
achin: sha256 and sha512 are the same algorithm internally; the sha2 family. so, yes, attacks on one will almost certainly be relevant to the other.
<edrex>
Pyrotherm: yup
<vijayee_>
whyrusleeping: I was following along on the "roll your own" example and I keep getting an error at the IDB58Decode step for the client connect "multihash too short. must be > 3 bytes"
<vijayee_>
is the string output for the host step the right format to pass in?
wasabiiii1 has joined #ipfs
<Pyrotherm>
if I wanted to help this project in any way, what would be a good starting point?
<giodamelio>
I was wondering the same thing
wasabiiii has quit [Ping timeout: 250 seconds]
<Pyrotherm>
I'll be spinning up some VMs with decent bandwidth tonight for fun, and to add some capacity to the network
shea256_ has quit [Read error: Connection reset by peer]
<achin>
M-hash: it seems fitting that someone named M-hash can answer a question about hash functions :)
j0hnsmith has quit [Quit: j0hnsmith]
<whyrusleeping>
vijayee_: oooh, yeah. its printing out <peer abcde> right?
<whyrusleeping>
you need to pass in the whole hash...
<M-hash>
heh. it only took some of my closest friends a calendar year to realize maybe my moniker choice represents something of a religious obsession. ;) it's awful in this channel though. so many pings haha.
<whyrusleeping>
M-hash: lol, hash hash hash!
<vijayee_>
yeah that is what it prints
<vijayee_>
so if I pass it the full hash do I even need to encode it?
simonv3 has quit [Quit: Connection closed for inactivity]
<whyrusleeping>
vijayee_: yeah, i need to change the print to be id.B58String() or something
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<achin>
we all should shorten multihash to M-hash :D
shea256 has joined #ipfs
pfraze has quit [Remote host closed the connection]
<edrex>
pjz: If you're interested, I've thought about camlistore interop a lot.
<whyrusleeping>
jbenet: anacrolix's utp lib gets a little over 100Mbit on the localhost
<vijayee_>
whyrusleeping: thanks, it works now. I changed it to output node.Identity.Pretty()
<whyrusleeping>
i think i can probably improve that a bit, but i think it might be good enough to move forward with
<whyrusleeping>
vijayee_: awesome :)
dignifiedquire has quit [Quit: dignifiedquire]
<jbenet>
whyrusleeping: so the report is incorrect?
<jbenet>
whyrusleeping have a script to try it?
<edrex>
pjz: One thing I think would really help in that regard would be a human-readable, "plain old filesystem" storage/export format
<edrex>
which keeps all the richness of the data model (bidirectional) while exposing a static version of it in a way that's useful to a human browsing it.
<M-hash>
whyrusleeping: evul D:<<
<edrex>
(not the low-level data model, the high level one)
<vijayee_>
jbenet: I haven't touched it in a while, I could kick back into gear if its needed. I wanted to rewrite the testing portion of it to be independent of the go test framework
<jbenet>
yeah idk. it's not great.
<jbenet>
vijayee_ it would be really helpful to have this soon
<whyrusleeping>
its not great, but its pure go, theres room for improvement, it works, and it would help our resource consumption a lot
<vijayee_>
I'm not sure what the test cases should be for the different chapters
<jbenet>
whyrusleeping: we're not going to be go-gettable in the long run.
<jbenet>
i think we should just wrap libutp
dignifiedquire has joined #ipfs
<whyrusleeping>
yeah, we could do that to
<jbenet>
we have prebuilt bins, what's the major issue with that?
kerozene has joined #ipfs
<whyrusleeping>
nothing against doing that, but we have a pure go utp lib, and we dont have a utp lib wrapper
<jbenet>
yeah it's not even net pkg friendly. i'll write these-- here try out the udt wrapper for now
<jbenet>
and see how it goes
<whyrusleeping>
your udtwrapper?
leeola has joined #ipfs
<jbenet>
giodamelio: pls note IPNS is still wonky. i suggest not updating that page until tomorrow and the HN traffic passes.
<jbenet>
whyrusleeping yeah:
patcon has joined #ipfs
<giodamelio>
jbenet: ok, i'll hold off
ygrek has joined #ipfs
<jbenet>
i'm proud that our gateways handle the HN blasts so well. ipfs.io is so snappy.
<jbenet>
gj lgierth whyrusleeping and everyone else making ipfs
<achin>
if i wanted to help host the HN site, i could easily pin it. but i might not want to pin it forever. it'd be neat to have a way to say "pin this content for the next 72 hours only"
apophis has quit [Quit: This computer has gone to sleep]
<jbenet>
achin: indeed, i think there's a ton to do with pins, could have a whole set of interesting pin programs that check other conditions before pinning or unpinning
<jbenet>
(thankfully, it's all shell scriptable ;) )
<achin>
yeah, there are a lot of neat usecases to support in the future
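jbenet's "it's all shell scriptable" point applies directly to achin's 72-hour idea. A minimal sketch: `timed_pin` is a hypothetical helper function (not a built-in command), wrapping the real `ipfs pin add` / `ipfs pin rm`:

```shell
# Hypothetical helper: pin a hash now, unpin it after N seconds.
# "ipfs pin" has no expiry flag, so this just sequences pin add / pin rm.
timed_pin() {
  hash=$1
  seconds=$2
  ipfs pin add "$hash" || return 1
  sleep "$seconds"
  ipfs pin rm "$hash"
}

# e.g. run in the background:  timed_pin QmSomeHash $((72 * 3600)) &
```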
<jbenet>
whyrusleeping: another option is to add both anacrolix and udt, open both ports, and see how they both go.
notduncansmith has quit [Read error: Connection reset by peer]
dokee has joined #ipfs
<dokee>
hello
<achin>
hi
<dokee>
I posted a message in the google group but maybe the discussion will be easier here. It's about cyclic hyperlinks. For example I'm wondering how we can set up this very small static website with ipfs: http://bucketaatest.s3-website-us-east-1.amazonaws.com/static1.html
<vijayee_>
jbenet: for testing each command should I have the user pipe in the output from their local ipfs to the tour then verify their current chapter?
<vijayee_>
I'm not sure I know what you are imagining from the current content
tsenior`` has joined #ipfs
<jbenet>
Dokee: options: (1) relative links (href="./static2.html"), (2) if you truly cannot do relative links (i suspect you can), use IPNS
shea256 has quit [Remote host closed the connection]
<dokee>
jbenet: I can use relative links ... but won't the link change when the static2.html file is hashed?
<jbenet>
vijayee_ not sure. whatever is convenient for you. ideally user could just run "<cmd> test" and it should do all the checking
<jbenet>
Dokee: add a directory
<jbenet>
gimme a sec i'll show you
<dokee>
jbenet: thanks
<fazo>
Dokee: if you "ipfs add -r <directory>" then you will have a hash for every file and directory. If the directory hash is for example "somehash", and the directory contains "index.html" and "style.css" you can /ipfs/somehash/index.html and /ipfs/somehash/style.css
j0hnsmith has joined #ipfs
<dokee>
fazo: ok, then that's the answer I was looking for: the final file name doesn't change, we still use the full name, not the hash
<fazo>
Dokee: so relative paths will work unless you try to access the parent directory of the directory you added
<fazo>
Dokee: yes you can use the original name, but you need to use the hash for the first folder in the path
<fazo>
Dokee: so after /ipfs/ you have a hash, but then you can have a normal relative path
<dokee>
fazo: ok. is the directory hash tied to its content? i.e. if the content changes, does the directory hash change?
<fazo>
Dokee: yes
<fazo>
Dokee: the same file will always give the same hash and different files will always give different hashes
<dokee>
fazo: hmmm... I'll wait for jbenet and his example then ... still don't get it
<fazo>
Dokee: so if we both add the bootstrap.min.css from getbootstrap.com, if the file is identical the hash is the same, so to ipfs, it's the same file
<dokee>
giodamelio: thanks but not really. The problem is not really the link but the cyclic link. There you don't have a cycle in your links. your css is not referencing anything
<dokee>
@jbenet: ok saw your example. Understood: you can use the full name if you're using relative paths! now I suppose ipfs can still distribute it by using the directory hash... great! So next question: what if it's not in the same directory or on the same server?
<dokee>
@jbenet: thanks for taking the time to create this example by the way :)
<fazo>
Dokee: it doesn't matter which server has it, as long as at least 1 server who has the file is online, the file will arrive
qqueue has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet>
Dokee: the point is you can always create a "directory" that points to the two other things, and put them on "the same place"
<fazo>
Dokee: that's the point of IPFS: you don't care where the file is
<dokee>
even if I don't own the 2 files?
<fazo>
Dokee: to access a file you just need the hash
<jbenet>
so i can make a file that always links to: <a href="./something_else.html">surprise!</a> or even ../something_else.html, and others can "add it to their own dirs".
<fazo>
Dokee: you can create an IPFS folder with files you don't own right now: a folder is just a list of links to its files or other folders
<fazo>
Dokee: your machine will download the files you need from some other machine
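The pattern fazo describes can be sketched with a small wrapper. `publish_site` is a hypothetical helper: `ipfs add -r -q` prints only hashes, and the last one is the directory itself, so relative hrefs like `./static2.html` keep resolving under `/ipfs/<dirhash>/`:

```shell
# Hypothetical wrapper: add a whole directory, print the /ipfs/ path of
# its index page. Pages inside can link to each other with relative
# hrefs, which resolve under the directory's hash.
publish_site() {
  dir=$1
  root=$(ipfs add -r -q "$dir" | tail -n 1)   # last hash printed is the directory
  echo "/ipfs/$root/index.html"
}

# e.g.  publish_site mysite/   prints something like  /ipfs/<dirhash>/index.html
```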
<lgierth>
jbenet: do we have yet another HN post?
<dokee>
no copyright issue with copying the file?
<lgierth>
just asking cause that was plural up there ^ :)
mildred has quit [Ping timeout: 264 seconds]
<giodamelio>
lgierth: ya, that was me
<giodamelio>
sorry if it's causing any trouble
<fazo>
giodamelio: looks like my node has trouble downloading your blog :(
apophis has joined #ipfs
<fazo>
giodamelio: can only see it from gateway.ipfs.io
<Bat`O>
is there a reason for ipfs pin add to hang until everything is local and to add the pin only at the end ?
<giodamelio>
fazo: really? I can see it fine from my local gateway. How would I go about troubleshooting my connection to the network?
<spikebike>
Bat`O: if pin doesn't block, how can you tell if it worked?
<fazo>
giodamelio: uhm I don't know
<fazo>
giodamelio: we could see if our nodes are connected.
<jbenet>
lgierth: no it's giodamelio on HN today
<vijayee_>
jbenet: should the chapter contents be the same as the help text for the corresponding command for that chapter
<Bat`O>
fazo: i'm using the http API to build an app, so not doable
apophis has quit [Client Quit]
<fazo>
Bat`O: oh, makes sense
vijayee_ has joined #ipfs
<fazo>
is there a way to add a file or directory but get it wrapped in a directory with a custom name?
mildred has joined #ipfs
<fazo>
for example "ipfs add -r dir/ --name hello" would create a directory with a directory named "hello" with the content of "dir/"
<fazo>
it could be useful to then publish the hash
<fazo>
oh I'm an idiot,
<fazo>
it's in ipfs name publish already
<Bat`O>
fazo: ipfs add -w
gordonb has joined #ipfs
<fazo>
Bat`O: uhm, but what about the directory name? does it take it from the object i add?
<Bat`O>
no idea :)
<fazo>
I have another question then
<fazo>
is there a way to manually build a directory object?
<fazo>
like I give it the hashes and the name to associate to the hashes
<fazo>
and it creates the directory and gives me the hash
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<j0hnsmith>
I’m going to Pycon UK tomorrow, I’m going to put myself up for a talk about IPFS. I can cover the basics (I only found out about IPFS at container.camp on Friday) but I’m looking for any python specific topics, any ideas?
<Bat`O>
fazo: maybe try ipfs object {new, patch}
<fazo>
j0hnsmith: I think there's an api wrapper written in python
captain_morgan has quit [Ping timeout: 246 seconds]
<j0hnsmith>
fazo: yes, I saw that
<achin>
ion: i saw. that is some good rsync-fu
qqueue has quit [Ping timeout: 260 seconds]
screensaver has quit [Ping timeout: 246 seconds]
qqueue has joined #ipfs
Spinnaker has quit [Ping timeout: 256 seconds]
apophis has joined #ipfs
<fazo>
Why can't multiple ipns names for a node be implemented by representing the list of names as a folder?
<fazo>
Every node would have 1 IPNS hash which points to a folder with many other files/folders, each one is a different ipns "name" originating from that ipns node
Spinnaker has joined #ipfs
<fazo>
When a name is updated to a different hash, just recreate the folder and set it as the new ipns value
<ion>
jbenet, whyrusleeping: It would be cool to have an “ipfs diff” which recursively gets/accesses only the blocks whose checksums differ to pass to “diff”. And it will be even more efficient when files get chunked by a rolling hash.
tsenior`` has quit [Ping timeout: 240 seconds]
<whyrusleeping>
ion: what do you mean?
<whyrusleeping>
what would the usecase be?
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping>
j0hnsmith: if you have an outline or some content written down for your talk, i can definitely take a look and give some feedback
<ion>
whyrusleeping: For instance, i just looked at diff -u <(ipfs ls /ipfs/QmeZnaoktRvs3Ek8pQNyj8BbrkKuJ5Ez6JHi32dazL7cEJ) <(ipfs ls /ipfs/Qmew9eTZSvPPJVZcv9WLdH7VJ9nKe4QVxxsi4iSLXNhkrq) to see what changed between the RFC dumps and then looked at the difference between /ipfs/QmeZnaoktRvs3Ek8pQNyj8BbrkKuJ5Ez6JHi32dazL7cEJ/rfc-index.txt and /ipfs/Qmew9eTZSvPPJVZcv9WLdH7VJ9nKe4QVxxsi4iSLXNhkrq/rfc-index.txt. It would
<ion>
be nice to have “ipfs diff /ipfs/QmeZnaoktRvs3Ek8pQNyj8BbrkKuJ5Ez6JHi32dazL7cEJ /ipfs/Qmew9eTZSvPPJVZcv9WLdH7VJ9nKe4QVxxsi4iSLXNhkrq” which determines which files in the directories have changed, and why not also which blocks within files have changed, to provide a diff for human consumption.
<whyrusleeping>
ooooh, i totally wrote code for that
<whyrusleeping>
its not exposed through the API at all, but its there
<ion>
nice
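Until that code is exposed through the API, ion's manual workflow can be wrapped in a small function. A sketch; `ipfs_ls_diff` is a hypothetical name, and it only shows which directory entries changed, not the block-level differences ion also mentions:

```shell
# Hypothetical "poor man's ipfs diff": list two directory versions and
# diff the listings. Shows added/removed/changed entries by name+hash.
ipfs_ls_diff() {
  a=$(mktemp); b=$(mktemp)
  ipfs ls "/ipfs/$1" > "$a"
  ipfs ls "/ipfs/$2" > "$b"
  diff -u "$a" "$b"
  rm -f "$a" "$b"
}

# e.g. for the RFC dumps discussed above:
#   ipfs_ls_diff QmeZnaoktRvs3Ek8pQNyj8BbrkKuJ5Ez6JHi32dazL7cEJ \
#                Qmew9eTZSvPPJVZcv9WLdH7VJ9nKe4QVxxsi4iSLXNhkrq
```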
G-Ray has joined #ipfs
<G-Ray>
Hi
<j0hnsmith>
whyrusleeping: I don’t have anything yet, I was thinking I’ll just do a demo of the basic commands etc etc
<G-Ray>
Is anybody working on implementing i2p routing in IPFS?
<whyrusleeping>
G-Ray: afaik, nobody is working on i2p stuff, there has been some work on tor though
<whyrusleeping>
j0hnsmith: okay, well let me know if you want any feedback, i'd love to help make your talk awesome
<j0hnsmith>
whyrusleeping: thx
<G-Ray>
IPFS + i2p would be amazing right ?
<G-Ray>
as long as speed is correct
<whyrusleeping>
G-Ray: it would be pretty awesome to have, yeah
j0hnsmith has quit [Quit: j0hnsmith]
<G-Ray>
whyrusleeping: Do you know how hard it would be ?
<whyrusleeping>
G-Ray: right now, i'm not sure. I don't think it would be an absurd amount of work, for someone who is familiar with i2p
qqueue has quit [Ping timeout: 252 seconds]
<whyrusleeping>
and depending on if you have to implement it manually or if you can use an existing library
<G-Ray>
whyrusleeping: I may take a look at it soon. It would be a great project to do :)
<whyrusleeping>
as far as changes to ipfs, you would have to implement an i2p net interface
<whyrusleeping>
in the multiaddr-net package
<whyrusleeping>
and build in support for i2p addresses in multiaddr
wasabiiii1 has joined #ipfs
qqueue has joined #ipfs
wasabiiii has quit [Ping timeout: 240 seconds]
jhulten has quit [Ping timeout: 240 seconds]
dokee has quit [Quit: Page closed]
<G-Ray>
whyrusleeping: Thanks ! I can't imagine how great it would be if ipfs+i2p was widely used
<whyrusleeping>
G-Ray: i'm all about ipfs being used all over the place :)
<G-Ray>
whyrusleeping: Are you and jbenet working on IPFS at full time ?
<whyrusleeping>
G-Ray: we are, yes
<G-Ray>
whyrusleeping: Are you paid ?
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping>
G-Ray: yeap, i'm an employee of Protocol Labs Inc
j0hnsmith has joined #ipfs
<achin>
is Protocol Labs Inc also known as Interplanetary Networks?
<whyrusleeping>
achin: yeah, we changed the name
<elavoie>
Hi all, I am a newbie here. I am keenly interested in using IPFS for a distributed computing platform for the Web for my PhD project. (I saw daviddias new repo with similar goals ;-))
<whyrusleeping>
we weren't super sold on the earlier name
<whyrusleeping>
elavoie: welcome to the channel!
<daviddias>
elavoie: Welcome! :D
<G-Ray>
whyrusleeping: So if it's your company, where does your funding come from?
<daviddias>
did you see webrtc-explorer or the browserCloud.js research?
<elavoie>
quick question: suppose I add content on IPFS using 'ipfs add ...' while my daemon is running and I subsequently shut it down, will IPFS currently maintain a copy of the files?
<daviddias>
that's very cool btw :)
<daviddias>
elavoie: the content will be discoverable through IPFS as long as at least one node serves it
<elavoie>
as far as I understand no, right? I need to keep republishing for it to stay alive?
<daviddias>
if your node was the only one serving it, then the answer is no
<whyrusleeping>
G-Ray: from investors in the company right now
<elavoie>
I see, so atm someone else needs to explicitly pin it down for it to stay permanent
<daviddias>
but for example, if you had it shared with another person and they downloaded it through IPFS, then they will have a copy that they can serve
<fazo>
whyrusleeping: I included the license and randall's "about" html page
<fazo>
whyrusleeping: and the script I wrote to download the data
<elavoie>
is that a cached copy that would eventually be purged?
<achin>
yes, unless someone else pinned it
<Vyl>
Eventually, yes, as I understand it, unless one of two conditions are met: Either someone sets it to pin, or requests for it continue to sustain it. I think that's right.
<fazo>
elavoie: when you do ipfs add, the copy is never purged from your local machine
<fazo>
elavoie: because it's pinned
<elavoie>
I see
<whyrusleeping>
fazo: looking good!
<fazo>
elavoie: as long as someone is connected to a server which has the file, the file will be available to that someone
<elavoie>
so in other words, the storage backend for IPFS and the economic incentives to make people hold the data are still in the works
<fazo>
elavoie: so if you want to host something and be sure it will be available you need to have at least 1 machine which is on the net 24/7 and has it pinned
<fazo>
elavoie: well the storage backend already works fine
<elavoie>
understood, so I should think of IPFS as a combined GIT+BitTorrent for the moment
<fazo>
elavoie: ipfs doesn't make it so you don't need to host data anymore
<ekaron>
is there a plan for intelligent pinning in the protocol, or is that up to end users or end user's application logic? Is there a network incentive to pin?
<fazo>
elavoie: yes, the point of ipfs is that you don't care where the data is
<ekaron>
or is it just there to limit the periodic downtime of one central location?
<Vyl>
You can regard it as a 'magic store.' Stuff goes in, stuff comes out queried by hash.
<elavoie>
so that is sufficient for my first use case: A lab shares their public data on IPFS, runs computation on them and use IPFS as an efficient discovery and distribution platform and stores back the results locally.
<achin>
need more spacecats
<fazo>
elavoie: you could use ipfs to efficiently distribute the data, especially if multiple servers end up needing the same data
<daviddias>
jbenet: you make always such awesome gifs
<jbenet>
daviddias: thanks! need to be "playable", so you can pause them and advance them yourself
<jbenet>
(does webm allow this?)
<daviddias>
sorry, never tinkered with it
<achin>
you can indeed pause webms, but to advance you generally have a scrollbar widget. so you can't easily advance by exactly 1 frame, for example
<elavoie>
Do you guys have plans to provide distributed storage where the network would proactively and automatically manage data to make sure it stays available regardless of nodes joining in or leaving? (Similar to what MaidSafe is aiming for?)
<fazo>
jbenet: how do you do such awesome gifs?
<fazo>
elavoie: I don't think such a system is in scope for IPFS, but you could use IPFS to build it
<jbenet>
fazo: keynote -> export pngs -> splice
<spikebike>
elavoie: not currently.
thomasreggi has quit [Remote host closed the connection]
qqueue has quit [Ping timeout: 256 seconds]
<jbenet>
oh also, add more cats. always.
<jbenet>
a gif cannot have too many cats
<spikebike>
jbenet: any libp2p news?
<Vyl>
There have been similar projects before, but with more specialised ambitions.
<fazo>
jbenet: I'm going to steal that
<jbenet>
fazo go for it!
<daviddias>
spikebike: I guess I have the answer for you
<elavoie>
thanks, the project is actually complementary then.
thomasreggi has joined #ipfs
thomasreggi has quit [Remote host closed the connection]
<Vyl>
Freenet is based on a similar concept, but optimised for extreme privacy and censorship-resistance, for example. Which comes with serious performance penalties due to the amount of redirection required - it's very, very slow.
<fazo>
jbenet: you mean the ipfs-cat-gifs or the automatically managed distributed data storage? because there's a slight difficulty difference
simonv3 has joined #ipfs
<spikebike>
elavoie: two hardest issues are who pays for the extra storage and how can you protect people from hosting illegal content
<spikebike>
daviddias: oh?
<daviddias>
in short, yes, we are very close as I did the first iteration (meaning it works) of the Distributed Record Store using the DHT (and therefore protocol muxing, stream muxing, discovery, all of that)
thomasreggi has joined #ipfs
<jbenet>
fazo: i meant the cats
<daviddias>
right now I'm refactoring swarm to make it more modular and enable more transports
<Vyl>
And decentralised stores have been commonplace in the piracy community ever since the first gnutella and ed2k protocols. Which gave them something of a poor reputation, as their primary use was for copyright infringement.
<jbenet>
elavolie and fazo: but you should see filecoin.io -- which is just that
<fazo>
elavoie: such a distributed storage system as you described could also be built just by using a different ipfs implementation which is a lot smarter about handling cached data
qqueue has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to private-blocks: http://git.io/vZjUl
<fazo>
jbenet: of course, but I'm going to use my own house cats
<achin>
mrow
<whyrusleeping>
fazo: thats the best kind. cats that havent been on the internet yet
<whyrusleeping>
googles cats are overused
<jbenet>
daviddias: on that-- i think our last two comments work fine.
<daviddias>
and then, the thing other than extensive testing and coverage, is creating the WebCrypto module that takes care of using the WebCryptoAPI or the Node.js crypto module, so that we can use it on the browser as well
<daviddias>
jbenet: I believe it too :) but well, I still want to check notary stuff and make sure I'm not missing anything
<jbenet>
achin: dammit, how did you guess my password to everything
<fazo>
whyrusleeping: what do you mean never been on the internet? mittens runs gentoo
<elavoie>
My second use case would be to eventually send computations where the data is on the network to avoid moving it around and have everything happen in parallel.
<G-Ray>
any progress on filecoin ?
<G-Ray>
that you can share
<jbenet>
G-Ray i was going to say yes, but :)
<achin>
jbenet: fun fact: my wifi SSID is "mrow3" (mrow and mrow2 have been retired)
<jbenet>
G-Ray sorry for the delay, but we want to make things right, and pushing out IPFS first is important.
<jbenet>
achin: nice
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
atrapado has quit [Quit: Leaving]
<elavoie>
daviddias: I am planning to go spend March-April in Lisbon to work with Miguel Correia at IST. Maybe I'll see you there ;-)
ianopolous has joined #ipfs
<daviddias>
that sounds great! Who is Miguel Correia?
<fazo>
wait, is there a connection between filecoin and ipfs?
<elavoie>
well I have not stayed long enough last time to know him by his nickname ;-) I'll make sure I use it next time ;-)
<spikebike>
daviddias: "used considerably"? (from the first line of that ticket).
<daviddias>
that is really cool
<daviddias>
Well, he is Miguel Pupo Correia, and since Pupo is more unusual, everyone knows him by Miguel Pupo, that is why :)
<daviddias>
spikebike, like a lot of times? (that is what I meant)
<G-Ray>
IPFS now encrypts data transfered between nodes ?
<jbenet>
G-Ray depends on what you mean
<jbenet>
we have an encrypted transport, which is not known to be secure yet, so we don't trust it yet. (we will before any 1.0)
<spikebike>
daviddias: usually with crypto it's more along the lines of use it once and no more, thus the use of nonces.
<jbenet>
and you can encrypt the data too, (this will be native eventually. it's in our roadmap)
<G-Ray>
yes, I mean encrypted transport
<fazo>
do you guys have any useful resources to learn golang? I have experience with python, js/node, java, a little c and a little haskell
<elavoie>
I see ;-). I still have a few things to take care of, but I eventually want to take a closer look at the node implementation and see what is missing to have it be used in a web browser. I could totally use it and test drive it, and possibly contribute to the implementation. We'll see.
<fazo>
I figured I'd learn go since a lot of people are using it
<daviddias>
elavoie: what is the subject of your PhD?
<daviddias>
Have you spoken with Luís Veiga too? (he was my MSc thesis prof)
<elavoie>
"Pando: Scalable Dynamic Array-based Computing on the Web"
<elavoie>
and the demo where you ran a distributed computation on an array of numbers that were randomly sent to computation nodes
<elavoie>
you totally beat me to it ;-)
<daviddias>
One of the major problems I found was not even on building the parallel platform, but yet, how to properly load thousands of browser nodes
<daviddias>
I've some notes on where I want to take webrtc-explorer next (especially now with all the knowledge acquired from libp2p) and make it run in Node.js and the Browser, so I can get something like a PlanetLab of hybrid nodes for proper testing
<elavoie>
you mean, your problem was finding enough work for them to be busy ;-)
<whyrusleeping>
jbenet: those udt wrappers are ungodly slow
<whyrusleeping>
either that or it hangs
<elavoie>
yes that is exactly it, that is awesome
<whyrusleeping>
ah wait, maybe user error
wopi has quit [Read error: Connection reset by peer]
<daviddias>
there is a small set of problems that has to be solved for parallel computing
<whyrusleeping>
okay, good. its faster
<whyrusleeping>
was worried for a sec
<daviddias>
like distribution of data, orchestration, messaging, coordination and aggregation of data
wopi has joined #ipfs
qqueue has joined #ipfs
<daviddias>
that can be succinctly described and tackled individually, but the majority of the solutions just mix them all up
<daviddias>
very much like the network stack
<daviddias>
and so, very much like libp2p, that is where my thinking on the browserCloudjs R&D is going
ygrek has quit [Remote host closed the connection]
<daviddias>
if you are going to work on those problems during a PhD, I will be very interested to follow that development :)
<elavoie>
I see. I'll read your master thesis, properly cite it in my area proposal exam written document and oral presentation and give you some feedback ;-)
<jbenet>
whyrusleeping maybe do a benchmark of all? tcp, raw udp, anacrolix/utp, udtwrapper
<daviddias>
elavoie: that is awesome :) thank you!
<spikebike>
daviddias: ah, that's not the normal nonce issue, sounds like the openssl folks think it will be fixed
hellertime has quit [Ping timeout: 250 seconds]
ygrek has joined #ipfs
nessence has quit [Remote host closed the connection]
<spikebike>
daviddias: not sure IPFS should really worry about IPFS nodes getting attacked from the same node via complex timing attacks
<whyrusleeping>
jbenet: i'm just gonna run with this for now, swapping it for another udp protocol will be really easy later if we need
<daviddias>
spikebike: it is a security best practice to ensure key rotation and freshness
<jbenet>
elavolie daviddias: just catching up on your convo, but, getting a worldwide planetlab on consumer devices is something i really really want
<jbenet>
whyrusleeping: prob takes a few min to adapt your utp bench script to do the others, good to have, good to know.
<elavoie>
jbenet: me too ;-)
<whyrusleeping>
jbenet: already adapted it to the udt stuff, 64MB/s
<jbenet>
elavoie: maybe you should join daviddias and i on a call one of these weeks to discuss some plans we're hatching for that
<whyrusleeping>
== roughly 512mbit
<jbenet>
yay
<elavoie>
jbenet: sure
<jbenet>
thats loopback-- who knows how all these fail when you add latencies and bw limitations
<jbenet>
transports are surprisingly annoying to get right
<daviddias>
jbenet: I so wanted that :) I started by putting browsers on Docker containers, but I had so many GPU errors that I had no idea what I was looking at and had to postpone. Fortunately, and thanks to Jessie, I got them to work very close to the delivery, but still haven't had the chance to make some Terraform magic + piri-piri (my p2p browser orchestration
<daviddias>
tool)
<daviddias>
s/wanted/want/ :)
<whyrusleeping>
tcp is 1.6GB/s, so roughly 12.8gbit on the localhost ;)
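The conversions whyrusleeping is quoting are just bytes to bits (x8). A quick sanity check of the figures from this exchange:

```shell
# Throughput figures quoted above, converted from bytes/s to bits/s (x8):
awk 'BEGIN {
  print 64 * 8 " Mbit/s"     # udt wrapper: 64 MB/s
  print 1.6 * 8 " Gbit/s"    # tcp on loopback: 1.6 GB/s
}'
```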
<jbenet>
but hopefully not "will want"
<jbenet>
wow.
<daviddias>
elavoie: that is the first part, the project proposal. let me point to the actual thesis
<whyrusleeping>
tcp is pretty good at being tcp, lol
<jbenet>
UDT is supposed to outperform TCP on fast backhaul.
<elavoie>
daviddias: great thanks
nessence has quit [Ping timeout: 252 seconds]
<daviddias>
:D
<whyrusleeping>
jbenet: we are using bindings to a c++ lib
<jbenet>
yeah?
<whyrusleeping>
there's no way in hell we are going to touch a native tcp impl
<whyrusleeping>
not even remotely a possibility
<whyrusleeping>
every call into C incurs a noticeable delay
<elavoie>
Wait, you submitted in May? I was in Lisbon for almost three weeks in March!
<jbenet>
hmm not so sure about that. im sure QUIC has hacks around this-- there's ways to get NIC notifs directly i think
ygrek has quit [Ping timeout: 264 seconds]
<whyrusleeping>
jbenet: nope
<daviddias>
elavoie: not sure if we crossed each other, sad that we didn't meet before. I typically was at the Tagus Park Campus and not the Alameda one (that is where Telecom Eng is taught)
<elavoie>
oh that is why
<jbenet>
whyrusleeping: yes, there are. there are userland implementations of network stacks that outperform kernel stacks.
<whyrusleeping>
thats not what i was saying at all
<elavoie>
but your advisor office is at Alameda right?
<whyrusleeping>
i'm saying making cgo calls will never outperform a native impl
<whyrusleeping>
a good implementation in go can be just as good as an implementation in C
<daviddias>
elavoie: he has one in both (I had one in both too, but Tagus was closer to my home back then)
<whyrusleeping>
but calling over the boundaries is expensive (this is a limitation of go)
wasabiiii has joined #ipfs
<daviddias>
elavoie: I am really excited to know you are going to work on this stuff :) I hope I can help by shipping libp2p that you can use
wasabiiii1 has quit [Ping timeout: 256 seconds]
<jbenet>
whyrusleeping: ahhhhh i see. interesting, that's a bummer. why?
<whyrusleeping>
theres a lot of stack accounting that it has to do
<whyrusleeping>
to ensure that things remain safe
<jbenet>
mmmmm.
<jbenet>
by the way, i've been thinking of adding "external" transports to ipfs
<elavoie>
daviddias: That would be great, my background is in compilers and vm implementations and not distributed systems. The more I can use libraries developed by actual experts the better I will be ;-). (I am keen on learning and contributing though!)
<jbenet>
like: "ipfs swarm transport add <multiaddr> <unix-domain-socket>" or something
<fazo>
jbenet: you mean a way to let instances talk with a custom protocol?
j0hnsmith has quit [Quit: j0hnsmith]
<jbenet>
what i mean is, have a different program run the protocol you want to have a transport on, and speak over some pipe to it
<jbenet>
so we can use some of the transports available in other programs natively.
<jbenet>
it's annoying to have to do that, but it would make it way easier to add support for a bunch of other things, like proxies, bluetooth, audio, etc.
<jbenet>
fazo: no i mean a transport like udt/utp/ble/etc.
<whyrusleeping>
jbenet: lol, but we dont actually want tcp
<elavoie>
daviddias: I'm putting the final revisions on my area proposal exam. I would also like your feedback on it so you can tell me what to expect in terms of technical complexity for the Distributed Systems part and whether my time estimates for the Pando project are reasonable!
pfraze has quit [Remote host closed the connection]
<jbenet>
oh i know, but a year ago i was thinking of that as a hack.
<daviddias>
elavoie: sounds good to me :)
<elavoie>
daviddias: awesome, the project is getting more and more exciting!
<daviddias>
Make sure to account for the testing infrastructure
bbfc has joined #ipfs
<elavoie>
ok!
<daviddias>
I didn't realise it was going to be so big and time consuming and then got into huge stress by running against the clock to make something good enough to test
gordonb has quit [Quit: gordonb]
<daviddias>
current browser testing frameworks don't account for P2P applications
<elavoie>
right
<whyrusleeping>
jbenet: oh! yeah, lol
<daviddias>
but yeah, send it to me when it is ready, I'll be happy to review it :)
<elavoie>
hopefully by the end of the day tomorrow
<daviddias>
sgtm :)
<elavoie>
ok, I'll get that done and will show back up later!
<jbenet>
hey, who here wants to help us answer lots of questions? i'll voice people. i'll do it for daviddias, davidar, cryptix, mappum, anyone else want?
<whyrusleeping>
daviddias: are you registered with freenode?
<daviddias>
nice, going to have my '+' again; IRCCloud keeps losing it for me :D
captain_morgan has joined #ipfs
<achin>
are answers from people with hats supposed to be more trustworthy?
<daviddias>
yes
<whyrusleeping>
daviddias: hrm... chanserv should be autovoicing you
gordonb has joined #ipfs
<achin>
i'm happy to help answer questions, but i don't need +v for that
<jbenet>
voxelot that's great, would love to see it. we need to get ipns fixed up for you.
<voxelot>
sure
<voxelot>
ohh that looks like it
<daviddias>
<jbenet>
whyrusleeping: how do you feel about proving me wrong about all the holdup with records and adding a trivial int to the dht records for now, to get a bit of a boost on ipns?
<jbenet>
oh sure
<jbenet>
i havent kept up
<whyrusleeping>
jbenet: lol, where do you wanna put that on my priority list?
<whyrusleeping>
top of the list right now is the udp stuff
<whyrusleeping>
behind that and finishing up PRs, i was going to put the chunking algos in the unixfs roots
<fazo>
whyrusleeping: may I ask what problem you're having with udp?
wopi has quit [Read error: Connection reset by peer]
<whyrusleeping>
fazo: no problems, just have to integrate it
notduncansmith has joined #ipfs
<fazo>
whyrusleeping: so it uses tcp now?
notduncansmith has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
<whyrusleeping>
yeap, ipfs uses tcp
<whyrusleeping>
and we are catching fire because of it
<whyrusleeping>
apparently tcp uses up a lot of file descriptors
<fazo>
whyrusleeping: oh that problem
<whyrusleeping>
yeah, with udp we will be able to have just one file descriptor
<whyrusleeping>
which will be sweet
<fazo>
whyrusleeping: but don't you need to write a new part of the protocol if you switch to udp?
<whyrusleeping>
fazo: no, we're going to be using a reliable ordered udp protocol
<whyrusleeping>
(UDT)
<whyrusleeping>
so it should work just the same
<fazo>
whyrusleeping: oh so of course it's not just plain udp
<fazo>
whyrusleeping: that would be crazy
<whyrusleeping>
nope, that would be hard
<rendar>
whyrusleeping: what is the hard part there?
kerozene has quit [Max SendQ exceeded]
apophis has quit [Quit: This computer has gone to sleep]
<whyrusleeping>
rendar: not much difficulty to it, just work
<whyrusleeping>
gotta touch a lot of things
<whyrusleeping>
test a bunch of other things
<whyrusleeping>
and make sure it all works as expected
<rendar>
yeah, i agree
<rendar>
who backs ipn Inc. financially?
qqueue has quit [Ping timeout: 250 seconds]
<bbfc>
Hello, does anyone know how I can debug/diagnose 'context deadline exceeded' errors from gateway.ipfs.io? I'm the only node with that IPFS hash; maybe it's a firewall or slow upload?
<whyrusleeping>
rendar: thats a question for jbenet
<achin>
what's the hash bbfc ?
<whyrusleeping>
i just write code :)
<rendar>
whyrusleeping: lol, ok :)
<whyrusleeping>
we did change the name from ipn to protocol labs though
<fazo>
bbfc: saw that a few times, it went away after I anxiously refreshed the page
kerozene has joined #ipfs
<rendar>
whyrusleeping: cool
<whyrusleeping>
bbfc: thats either your node being unable to connect to the network for some reason, or just the discovery process being slow
<whyrusleeping>
which is known to happen
qqueue has joined #ipfs
<bbfc>
its /ipfs/QmQnnvQ5SN2MSfNvAZoohRRyiejiCJpvQUC1YsFnxPzzwW/index.html
<achin>
according to my node, no one has a copy of QmQnnvQ5SN2MSfNvAZoohRRyiejiCJpvQUC1YsFnxPzzwW
<fazo>
achin: same
<fazo>
maybe bbfc is not correctly connected to the network
<achin>
bbfc: what does "ipfs swarm peers" say on your node?
<bbfc>
achin: 71 nodes, first one is /ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ
<fazo>
bbfc: you can try ipfs swarm peers | grep QmdATh2XUvfy5jys78RoeFSKLyBjeMDv9cjby2TRwjCd3X to see if I'm there
<whyrusleeping>
(thats the QmaCpd node)
<whyrusleeping>
spikebike: mind filing an issue for that on the ipfs/website repo?
<fazo>
bbfc: my node just downloaded your page :)
<whyrusleeping>
we're still getting the process of mirroring our page on ipfs right
<fazo>
whyrusleeping: yeah my node got that hash too now
<edrex>
Terminology question: I'm using "root" to refer to the base node of an IPFS traversal path like `/ipfs/<root node>/<some path>`, but it's an overloaded term. Better term?
<achin>
i like how IPN has machines named after planets :)
<whyrusleeping>
edrex: uhm... path-root is normally what i say
<whyrusleeping>
or hashroot
<edrex>
thx whyrusleeping, that's better
<whyrusleeping>
its definitely still a root of some kind
<fazo>
achin: mine and my friend's machines are named after Don't starve characters
<fazo>
achin: actually it's a mix of don't starve and life is strange characters now because don't starve has "Maxwell" and "Maxine" was too good to be thrown away
<bbfc>
fazo: no, i don't have you.
<bbfc>
mars
<fazo>
bbfc: too bad, my node managed to download your data though
<achin>
fazo: yeah, but you didn't name your company "Interplanetary Networks" :P
wasabiiii has quit [Ping timeout: 265 seconds]
<fazo>
achin: I don't have a company :(
Guest62677 is now known as ehd
wasabiiii has joined #ipfs
ehd is now known as Guest79487
<bbfc>
fazo/mars: thanks! Since you guys added my hash, it's working now on the gateway. Any idea why? Because I'm a new node? Can I get a 'higher' rank on the node list? I'm new to IPFS, just trying to understand
<fazo>
bbfc: I don't think there is a rank
<fazo>
bbfc: I think it's just that your node has a few connection problems (maybe slow bandwidth, aggressive firewall, strong NAT?)
domanic has joined #ipfs
<bbfc>
yeah, strong nat, slow upload speed [third world countries]
<achin>
how long ago did you add that content?
<fazo>
bbfc: I'm sorry. I have 7 megabits download and 0.5 megabits upload.
<bbfc>
achin: 1 hour
<whyrusleeping>
this slowness and bandwidth usage is going to be much improved
<bbfc>
achin: Strange, i have the same, ~5mb download, ~0.3 upload
<fazo>
whyrusleeping: it's already working well for small files
<whyrusleeping>
right now bitswap has a perfectly naive implementation, which makes development easy, but wastes bandwidth and results in redundantly sent blocks
<bbfc>
fazo: I will fix the nat/upnp shit on the router
<fazo>
whyrusleeping: the problem is huge files because the downloads are not distributed
<bbfc>
fazo: thxs very much
<fazo>
bbfc: I think go-ipfs supports upnp
<whyrusleeping>
fazo: yeah, you end up getting lots of duplicate blocks because everyone is willing to send everyone everything
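The duplicate-block problem whyrusleeping describes can be modeled in a toy sketch (hypothetical names and data structures, not bitswap's real ones): if every peer sends every wanted block it has, the requester receives duplicates, while assigning each wanted block to a single peer avoids the waste.

```python
# Two wanted blocks, three peers that hold overlapping copies.
wantlist = {"QmA", "QmB"}
peers_have = {
    "peer1": {"QmA", "QmB"},
    "peer2": {"QmA"},
    "peer3": {"QmB"},
}

# Naive swap: every peer sends everything it has that is wanted.
naive_received = [cid for have in peers_have.values() for cid in have & wantlist]
print(len(naive_received))  # 4 sends for only 2 wanted blocks

# Smarter: once a block is assigned to one peer, drop it from the wantlist.
remaining = set(wantlist)
assigned = {}
for peer, have in peers_have.items():
    take = have & remaining
    assigned[peer] = take
    remaining -= take
print(sum(len(v) for v in assigned.values()))  # 2 sends, no duplicates
```
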
<fazo>
bbfc: yep, it's using upnp fine on my system
<jbenet>
whyrusleeping: your priorities sound good. i think the udp stuff is pretty high prio.
<fazo>
whyrusleeping: maybe you could publish that you need the file and get a block list as an answer
<fazo>
whyrusleeping: then you can ask for the single blocks?
<fazo>
whyrusleeping: maybe not even a block list, just a block count (if the blocks have all the same size)
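fazo's last idea can be sketched with fixed-size chunking (the 256 KiB block size is an assumption based on go-ipfs defaults of the era, and the helper is hypothetical): when every block except the last has the same size, a block count fully describes the file, whereas variable-size chunking would need the full list of block hashes.

```python
import hashlib
import math

BLOCK_SIZE = 256 * 1024  # assumed fixed chunk size (go-ipfs default at the time)

def chunk(data: bytes, size: int = BLOCK_SIZE) -> list:
    """Split data into fixed-size blocks; each block gets content-addressed."""
    return [data[i:i + size] for i in range(0, len(data), size)]

data = bytes(600 * 1024)  # a 600 KiB file
blocks = chunk(data)
print(len(blocks))  # 3 blocks: 256 + 256 + 88 KiB

# With equal-size blocks, a peer can request block i by offset alone;
# with variable chunking it would need the hash of every block instead.
hashes = [hashlib.sha256(b).hexdigest() for b in blocks]
assert len(blocks) == math.ceil(len(data) / BLOCK_SIZE)
```
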
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<fazo>
maybe I should look at what a block is exactly before suggesting
<achin>
always a good idea!
Guest79487 is now known as ehd
rendar has quit []
<ehd>
cryptix: yo, i just saw you don't need a paid dev account anymore for deploying to your iOS device: https://developer.apple.com/xcode/
brandonemrich has quit []
<jbenet>
rendar: i'm happy to answer questions about protocol labs, but i really want to keep that sort of discussion mostly out of this channel, because i always hated when $bigcorps run pseudo-open open source projects, where you pretty much have to be part of the company to get a say. i haaaaated that. so you don't hear us talk a ton about protocol labs here.
<jbenet>
this is the #ipfs community, which overlaps, but is not protocol labs.
tsenior`` has quit [Ping timeout: 240 seconds]
qqueue has quit [Ping timeout: 264 seconds]
captain_morgan has joined #ipfs
<edrex>
giodamelio: how are you finding hugo?
domanic has quit [Ping timeout: 240 seconds]
<jbenet>
rendar: in short, we're funded by a number of investors (inc YC), who care about various things, from the future of the web, to CDNs, to devops, to bitcoin, to blockchains, to crypto tech. filecoin.io is going to be a source of revenue for us, to ensure we can continue upgrading the network stack.
qqueue has joined #ipfs
<edrex>
sorry, OT
<jbenet>
edrex: no worries. this is a pretty organic channel :)
jcorgan has left #ipfs [#ipfs]
<giodamelio>
edrex: Pretty good, I just started with it yesterday, but I have been able to do pretty much everything I want with it. It's put together pretty well and I think it has all the features I will need. Plus it's under active development and the devs are super nice.