<_jkh_>
whyrusleeping: hey, if you’re around: A question came up today in our own storage forum regarding IPFS and the UIs we should prioritize for it
<_jkh_>
whyrusleeping: namely, the question of “peer groups”; what is the current thinking around being able to easily manage more limited groups of peers for more limited publishing of content into IPFS?
<_jkh_>
whyrusleeping: obviously “public” or “only on my machine” aren’t the sorts of boolean options your average businesscritter is going to want
<M-hash>
^alsowant
<_jkh_>
well, if the ipfs folks are not working on that one or plan to work on it soon, that sounds like one I can add to our own todo list and contribute it back
magneto1 has quit [Ping timeout: 268 seconds]
rongladney has quit [Quit: Connection closed for inactivity]
<whyrusleeping>
_jkh_: yeah, our current plan for that is encrypted content with shared keys
<whyrusleeping>
to form the 'groups'
<whyrusleeping>
the other option, which is slightly different is to have 'private networks' where your node is only connected to other nodes that you want to be connected to
fazo has joined #ipfs
<fazo>
Failed to load resource: the server responded with a status of 400 (Bad Request)
<fazo>
for some reason the gateway doesn't like it (parameter?)
<whyrusleeping>
fazo: yeah, its been fixed since
<fazo>
oh no
<whyrusleeping>
but we havent shipped a new webui
<fazo>
I just realized the hash is missing
<fazo>
of course it won't work
<fazo>
it's for my little app
<fazo>
how do I tell it to load the fonts on a different url
<whyrusleeping>
use an absolute path?
<rschulman>
evening folks
<fazo>
looks like I never told it where the fonts are
<whyrusleeping>
rschulman: good evenin!
<rschulman>
whyrusleeping: How are things?
<fazo>
uh, it works :)
<whyrusleeping>
rschulman: eh, very unproductive weekend
<fazo>
whyrusleeping: I'm going to try implementing the service discovery hack we talked about in my ipfs bookmarks app (to find other people's bookmarks if they share them)
<fazo>
just to see how it plays out
<rschulman>
whyrusleeping: My son's had a cold the past few days, but we still managed to get a lot of stuff unpacked.
<whyrusleeping>
fazo: sounds good!
<rschulman>
Also went to ikea, and came out still married, so that's something.
<whyrusleeping>
rschulman: oooh, thats no fun
<fazo>
whyrusleeping: yeah I just updated the ux a little (it was horrible) /ipfs/QmfX93JbzAzVZ6DKED1LyxzXeJ6Q1svZcTCnWJS82eryLd
<whyrusleeping>
last time i went to ikea i left with a bunch of artistic twigs and a different girlfriend
<whyrusleeping>
still have the twigs
<whyrusleeping>
fazo: mmm, bootstrappy
<fazo>
whyrusleeping: yeah, I take shortcuts because I'm not a designer I guess :(
<fazo>
as long as it works and users don't vomit it's fine by me
<rschulman>
fazo: Does your app store the bookmarks in ipns local storage or something?
<fazo>
rschulman: it uses the browser's local storage, but I plan to implement a way to store them to ipfs from the app itself
<rschulman>
ah, cool
<fazo>
rschulman: if you delete all bookmarks, all that's left in the local storage is a key set to "[]"
<fazo>
rschulman: yes, and if I make it, you'll be able to see other people's bookmarks if they choose to share them :)
<rschulman>
right, that would be really cool
<rschulman>
course for that you'd need some kind of pub/sub again
<rschulman>
always comes back to pub/sub
<fazo>
the master plan is to eventually build a fully distributed reddit clone where anyone can host a "subreddit" and manage it, and authentication is done via ipfs keys. Too bad it requires a js implementation to work well
<fazo>
rschulman: well, the idea is to look for people that have the "i-got-bookmarks" file, store their ID locally, then resolve their IPNS and if the resulting hash matches an object template, it's their bookmarks :)
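A minimal sketch of that discovery flow using the go-ipfs CLI (the marker content and file names are hypothetical; it assumes every participant adds the same well-known bytes):

    # every participant adds the same marker content, so its hash is well known
    echo "i-got-bookmarks" > marker
    MARKER=$(ipfs add -q marker)
    # ask the DHT who is providing that block
    ipfs dht findprovs "$MARKER"
    # for each peer ID returned, resolve their IPNS name and inspect the object
    ipfs name resolve <peerID>
    ipfs ls <resolved-hash>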
luca has joined #ipfs
kerozene has quit [Max SendQ exceeded]
<rschulman>
fazo: Right, so you're just planning to watch the DHT advertisements?
<fazo>
rschulman: yep something like that
<rschulman>
cool
<fazo>
there's a command to see who has a hash in go-ipfs. I don't know how it works internally but I plan to use that
<fazo>
all go-ipfs commands are automatically exposed to the http api
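For example, the same provider lookup is reachable from the CLI and over the daemon's HTTP API (a sketch, assuming the default API port 5001):

    ipfs dht findprovs <hash>
    curl "http://localhost:5001/api/v0/dht/findprovs?arg=<hash>"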
<rschulman>
right
<rschulman>
so if you're using your app, you would create a local file in your IPFS stash that's just the contents "I'm using IPFSbookmarks" or whatever
<fazo>
yes
<rschulman>
and look for other people who also have that local file
<rschulman>
interesting, very clever
<fazo>
that was whyrusleeping's idea
<rschulman>
he's a clever guy
<fazo>
mine was a lot uglier
<rschulman>
haha
<rschulman>
I was struggling with this too when I was working on an app a while ago.
<fazo>
I planned to get the list of IDs from everyone I'm connected to and resolve all their ipns names
<rschulman>
I'll have to keep that in mind if I go back to it.
<zignig>
fazo , working on a similar thing in golang
<fazo>
I can't wait to build a full internet board app like reddit
<fazo>
or rework an existing one like telescope to use ipfs
<fazo>
true decentralized dynamic web apps, without having to do anything except open the link in a browser
<rschulman>
fazo: I'd really be interested in helping with that too.
<rschulman>
You think you'd need a native js implementation to make it happen, though?
<rschulman>
I was working on a possible twitter-clone, but same idea really.
<fazo>
yes, totally. One is being worked on by people smarter than me though :)
<rschulman>
lol
<fazo>
you want a full js implementation so that people without the go-ipfs daemon can use it
<fazo>
I think it would be one of the first dynamic static web apps. It's so cool it almost doesn't make sense
<rschulman>
oh yes, there is that, of course.
<fazo>
and servers going down from traffic overload will never ever happen again
<rschulman>
who is working on it and are they doing it open source?
<rschulman>
sorry, I meant the reddit/twitter clone thing
<fazo>
oh, that is just an idea
<rschulman>
ah, gotcha
<rschulman>
you were saying that people are already working on a native js ipfs stack.
<rschulman>
I see.
<rschulman>
well, then let's build the twitter clone! :)
<fazo>
yeah we could start doing the UI and designing the way data is distributed :) but I don't think it makes a lot of sense to start now
kerozene has joined #ipfs
<fazo>
I have other things on the priority list like the little tool that uses ipfs to automagically sync folders between computers
<fazo>
and this bookmark thingy
<rschulman>
because not enough people have ipfs installed yet?
<fazo>
rschulman: yes, it would require go-ipfs installed and with the daemon running for every user
<rschulman>
yep, it would, true
<fazo>
then we would need to figure out how to store the data. We would need to keep files as small as possible (like a single post per file) then you would need to sign every user action with their key
<fazo>
I don't know if there's a way to sign stuff with the current API
<zignig>
fazo: only on ipns and you can't get the sig at the moment.
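A rough sketch of what that leaves you with today: IPNS records are signed by the node's key, so publishing app state under the node's IPNS name is the closest thing to a signed user action (the directory name is hypothetical):

    # add the user's state (posts, votes, ...) and publish it under the node key
    ROOT=$(ipfs add -r -q my-app-state/ | tail -n1)
    ipfs name publish "$ROOT"
    # others can resolve it; the record's signature is verified internally,
    # but the API doesn't expose the signature itself
    ipfs name resolve <peerID>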
<rschulman>
and that way you lose your "identity" whenever ~/.ipfs is wiped out
<rschulman>
so there's a downside to that too
<zignig>
rschulman: backups backup backups.
<rschulman>
zignig: average users do not backup.
<fazo>
zignig: yeah of course every user would have their ipns pointing to some tree structure containing all their data of the service
<rschulman>
also, that only solves part of the problem. I use multiple computers.
<M-hash>
whyrusleeping: re peer groups earlier, i'd like to mention that while encrypted content on the roadmap is aces, it is absolutely not a full replacement for private networks
<zignig>
fazo: keychains , signing and better ipns would help with your plan.
<fazo>
we would have at least a dozen nodes backing up every single file relevant to the service to make sure data doesn't get lost
<M-hash>
i might have a situation where i'm shuttling vasty amounts of information around in a datacenter and it's both private on a level where i don't even find traffic analysis tolerable exposure, and more importantly from a business perspective i just plain need to be able to audit, control, and if necessary detonate a whole network's storage capabilities
<fazo>
M-hash: looks like you just need the nodes to be configurable in a way that you can choose with what IDs they connect or share your private data
<M-hash>
p much yeah
<fazo>
yeah my personal plan for ipfs is to use it to store everything that's not relevant to a single machine
<fazo>
especially virtual machines. It looks perfect to store shared virtual machine disks
<fazo>
and media
<fazo>
we have plex and its competitors to roll our own netflix, but the truth is that they use http or similar protocols and it sucks if you can't/won't port forward
<M-hash>
heh. plex on ipfs would be... highly acceptable.
<fazo>
few days ago I tried streaming 1080p video over my shitty 802.11g wifi and it managed to work perfectly.
<fazo>
was as simple as "ipfs add <file.mp4>" on a machine and "mpv http://localhost:8080/ipfs/<hash>" on the other
<fazo>
5 seconds or less between the commands, it started instantly and streamed without a hitch. I was honestly surprised by the optimization put into it, even when I started serving big files to friends and they were accessing it fine
<fazo>
way better than anything else I've ever used as far as file sharing goes
<fazo>
especially here in pizzaland where 7 mbit ADSL is top quality
<zignig>
as the transport gets more optimization that should get even better.
<rschulman>
pizzaland?
<fazo>
also known as italy
<rschulman>
ahhh, haha
<fazo>
so sad that we've got one of the least restrictive free speech and general internet usage laws
<fazo>
and one of the worst connection quality in europe
amstocker has joined #ipfs
<whyrusleeping>
M-hash: that was why i stated both options
<whyrusleeping>
because they have different use cases
<M-hash>
haha, okay, carry on :D
<M-hash>
it's on my mind atm because i'm building a service on top of ipfs and so in CI i'd like to have a private node that doesn't talk to anyone else because it knows it's going to get detonated any second, and i'm not sure if that's possible yet
<whyrusleeping>
M-hash: 'ipfs bootstrap rm --all'
<whyrusleeping>
before starting the daemon
<M-hash>
can i tell it to not be chatty, though?
<whyrusleeping>
M-hash: what do you mean?
<M-hash>
i don't want to say, advertise myself as a peer for something, when i know i'm running a temp instance with an uptime of ~0seconds
<whyrusleeping>
M-hash: run that command i sent you
<whyrusleeping>
M-hash: 'ipfs bootstrap rm --all'
<M-hash>
does "rm" also imply "i won't dial the dht"?
<whyrusleeping>
that removes all peers from your bootstrap list
<whyrusleeping>
which implies that you have no way of connecting to the network
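A sketch of that setup for a throwaway or private node; the multiaddr is a placeholder, and the mDNS config key is an assumption about how local discovery is toggled:

    # empty bootstrap list: the node has nobody to dial on the public network
    ipfs bootstrap rm --all
    # optionally turn off local (mDNS) discovery too
    ipfs config --json Discovery.MDNS.Enabled false
    # for a private cluster, add only peers you trust (placeholder multiaddr)
    ipfs bootstrap add /ip4/10.0.0.2/tcp/4001/ipfs/<peerID>
    ipfs daemon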
apophis has quit [Quit: This computer has gone to sleep]
captain_morgan has quit [Ping timeout: 240 seconds]
<fazo>
whyrusleeping: M-hash: that way, with a custom bootstrap list, you can have a private network
<fazo>
the problem is that if someone figures out your address he can still connect and read all the files
<M-hash>
ohhhhh. interesting
<fazo>
but if you block incoming connections
<fazo>
no one should be able to connect unless they have access to the lan which shouldn't happen and would bring problems anyway
<M-hash>
eeeeeeyah i'm gonna try to avoid invoking the network layer as a security feature on principle(tm), but good to know
<zignig>
whyrusleeping: or that shared key we were talking about before...
<fazo>
yeah it's a workaround, of course
<whyrusleeping>
zignig: right, thats option number two that i talked about above
<M-hash>
just not having a list of folks to dial does solve my "today" problem though, so, thanks :)
<dignifiedquire>
ipfs_students: also the videos on http://ipfs.io/ are a good starting point
<ipfs_students>
yes
<ipfs_students>
i have seen the videos too
<ipfs_students>
i am working on my final year project that is a distributed video streaming portal on ipfs
<ipfs_students>
how can we implement a search engine over ipfs
ygrek has joined #ipfs
<dignifiedquire>
ipfs_students: I think this is probably document wise the best thing floating around on github atm: https://github.com/ipfs/specs/pull/19
<ipfs_students>
can you tell me one thing, how can we add a file to ipfs local repo from a website ?
<whyrusleeping>
ipfs_students: i've thought about search a few times
<whyrusleeping>
the way i would do it would be to run a decent number of indexing nodes that record all of the 'provides' notifications that come in across the network
<whyrusleeping>
and request those blocks, and categorize them
<whyrusleeping>
like checking mime types, etc
<whyrusleeping>
although, you can probably take some shortcuts and not fetch child blocks of files whose types and general content you already know
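A very rough sketch of that categorization step for a single hash an indexer has learned about; `file` is only used here to guess a mime type:

    # fetch the first few KB of the object and guess its content type
    ipfs cat <hash> | head -c 4096 | file --mime-type -
    # for directory objects, list the links instead of fetching everything
    ipfs ls <hash>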
<sickill>
ipfs_students: re adding a file to local repo from a website: you can just execute system command ("shell out") with "ipfs add path-to-uploaded-file"
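A minimal sketch of that shell-out from a server-side upload handler (the variable name is hypothetical):

    # -q prints only the hash, which the site can then link to via a gateway
    HASH=$(ipfs add -q "$UPLOADED_FILE_PATH")
    echo "http://localhost:8080/ipfs/$HASH"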
rschulman has joined #ipfs
<lgierth>
richardlitt: absolutely, let's look at dnslink-deploy today -- i've been knocked out yesterday... how about ~4pm UTC?
<rschulman>
man the "ipfs" twitter search can be...problematic sometimes.
<lgierth>
i'm in the middle of my cleaning chores so i need a bit more
<richardlitt>
lgierth: works for me. see you in two hours.
<lgierth>
cool
jamescarlyle has quit [Remote host closed the connection]
brab has quit [Ping timeout: 244 seconds]
twistedline has joined #ipfs
Leer10 has quit [Read error: Connection reset by peer]
nessence has joined #ipfs
ygrek has joined #ipfs
nicolagreco has quit [Quit: nicolagreco]
doei has joined #ipfs
apophis has quit [Ping timeout: 264 seconds]
apophis has joined #ipfs
CarlWeathers has quit [Remote host closed the connection]
<richardlitt>
Cool. Want to add your Todos now? Same for @dignifiedquire, @cryptix, and @davidar
* whyrusleeping
starts to write up an update
<dignifiedquire>
Where to post? just as a comment in the issue above?
<lgierth>
richardlitt: i'd say nevermind the todos, but everybody should post their updates, yes
<whyrusleeping>
dignifiedquire: yeap, post your update in the comments for the record, and then we take turns writing them in chat so we can discuss them
<richardlitt>
dignifiedquire: yeah.
<lgierth>
dignifiedquire: we'll have an etherpad for next week's todos if that's what you're after
<richardlitt>
lgierth: Still good to have them for the record if you have them.
<lgierth>
did i say i forgot to post? i actually forgot to write them :P
<whyrusleeping>
lgierth: were we seeing issues caused by too many docker logs piling up?
<lgierth>
yeah they needed regular cleaning up
<lgierth>
i've been meaning to wait for ansible-2.0 which can do this per-container, but i've disabled it altogether for now
<jbenet>
lgierth: how's the denylist stuff going?
<richardlitt>
Cool. Anything we can do to help you have more completed tasks for the next sprint? You mentioned that you kept getting distracted last week, how was it this week?
<lgierth>
doesn't create the html yet but i might get that ready to PR tonight
<lgierth>
so glad when that's finally done...
<lgierth>
i keep dragging these todos with me
<richardlitt>
Hmm. Is there anything blocking them? Do you want help?
<lgierth>
it's gonna get better once i can concentrate on something fun, and once others can take care of the infrastructure too (please let me know if there's any ambiguous or missing docs!)
<richardlitt>
Cool. Anyone want to go next? @whyrusleeping?
<whyrusleeping>
sure
<lgierth>
richardlitt: "the tasks" is really just the denylist and the infrastructure, the former i don't need help with, and the latter needs others to share the work -- cc jbenet, whyrusleeping :)
<whyrusleeping>
- working on a set of patches for ipns to make it work until we get ipld stuff
<whyrusleeping>
udt stuff was quite the time suck. But i'm pretty confident in it now
<whyrusleeping>
- [x] extracted logging into go-log
<whyrusleeping>
- [x] updated godeps
<whyrusleeping>
eof
<whyrusleeping>
i think
<richardlitt>
Sweet.
wopi has quit [Read error: Connection reset by peer]
* daviddias
is ready
<jbenet>
lgierth: indeed, let's make sure whyrusleeping and I deploy things this week, and we can assign issues in ipfs/infrastructure accordingly. I'm wary of picking up more random small things, I've way too many. (if other people can step in would be good)
<jbenet>
there's tradeoffs of course. a) may want to keep finding something
<daviddias>
### SPRINT UPDATE
<daviddias>
- Making progress on the Swarm revisit, right now the current version of Swarm supports multitransport, but one at a time and only if they follow the net module interface, we are changing that to make it even more cool :) - https://github.com/diasdavid/node-ipfs-swarm/issues/8
<daviddias>
- [x] Descriptions for the IPFS Node.js modules so that people can contribute and understand them better (avoiding the ones that are more unstable) https://github.com/ipfs/node-ipfs/issues
<whyrusleeping>
jbenet: yeah, i figure we leave it up to the user for now, if we find that it makes more sense to auto remove them occasionally, we can switch to that
<jbenet>
but also don't just want to lose random stuff
<jbenet>
yeah sgtm.
<jbenet>
whyrusleeping and on the UDT stuff, all set after the PR?
<dignifiedquire>
(back)
<richardlitt>
daviddias: where did you collect the conferences?
<daviddias>
(channel) if there is someone with time and eager to help on the Node.js IPFS stuff, I definitely know more than ever where to point you to
<daviddias>
richardlitt: on the logistics repo, let me get that link to you
<jbenet>
daviddias: make sure to ask people who are watching node-ipfs
<whyrusleeping>
jbenet: yeah, PR 13 should be good
<richardlitt>
daviddias: I took a look at a couple of the repos, but I just don't know enough to really be able to help much code-wise at this point.
<jbenet>
(maybe can have a meta issue like "Where you can help" or something that you use to send a notif to everyone interested. not sure)
<daviddias>
jbenet: good idea! everyone will get a notification
<jbenet>
richardlitt: no i mean in the node-ipfs repo so the watchers of it get a notif.
<richardlitt>
jbenet: the node-ipfs readme got an overhaul and an issue for each repo, which should help. But a new issue would be good.
<richardlitt>
jbenet: got it.
<fazo>
daviddias: I have some experience building apps and tools using node, I know a little about networking, p2p and crypto and I tried looking at your work, maybe I'll be able to help but I need to study ipfs better
<richardlitt>
jbenet: best gif. Almost as good as you being a FUNKY ROCK STAR
lithp has quit [Ping timeout: 240 seconds]
<daviddias>
the frame rate is simply perfect
<fazo>
daviddias: I saw the bitswap implementation in node is missing and I started looking through how it's done in the go implementation
lithp has joined #ipfs
<fazo>
you guys should put that gif on the website :D
<whyrusleeping>
brb, picking up friend from airport
<richardlitt>
Cool. @daviddias all good?
<jbenet>
whyrusleeping: solid progress on udt. let's get that in and try it out. we may also want to cut 0.3.8 soon. lots of stuff already. let's finalize today roadmap to dev0.4.0
<jbenet>
richardlitt sec still catching up on feedback to whyrusleeping
<richardlitt>
jbenet: aight.
<daviddias>
fazo for bitswap to work in Node.js, we need a bit more stuff done in libp2p and also migrate the go version to be fully libp2p compatible. Nevertheless, a great help would be to document every bit of bitswap (expectations, what it does and what interfaces it uses) so it gets super fast to implement once we have both libp2p (go and node.js) done and tested,
<daviddias>
that would be a great help! :)
<dignifiedquire>
daviddias: you should distribute that gif every where :D
<dignifiedquire>
daviddias: on every readme
<jbenet>
daviddias: sounds good. let's nail down the iprs things later today
<fazo>
daviddias: so the best thing I can do is to write docs about how bitswap should be implemented?
<daviddias>
dignifiedquire: ahah I tend to agree :D
<fazo>
it sounds really nice :) I think I'll start if I feel like I get a good enough understanding of the context
<jbenet>
daviddias: ok all sgtm.
<daviddias>
fazo: that would be the first step, even if libp2p was finished right now :)
<richardlitt>
@jbenet go?
<fazo>
daviddias: I'll have a look at how libp2p works and how bitswap is implemented in go-ipfs then document it
<jbenet>
richardlitt ok sec
<daviddias>
fazo: maybe we could talk and I can give you a bunch of context :)
<fazo>
daviddias: that would be very nice :)
<richardlitt>
daviddias fazo Maybe in the node-ipfs call later?
<daviddias>
yeah, fazo join the call :)
<jbenet>
This week I got stuck catching up with an insanely large comm backlog:
<jbenet>
- [x] lots of ipfs mail
<jbenet>
- [x] issue discussions
<jbenet>
- [x] PRs/CR
<jbenet>
- [x] discussion calls
<jbenet>
- [x] design/impl discussions with @whyrusleeping
<jbenet>
Did not expect how much of a time sink all this would be.
<jbenet>
Of the items I had listed I only got through one :(
<jbenet>
- [x] decide on crypto transport
<richardlitt>
jbenet: if you ever have logistical org issues that do not require domain specific knowledge, please let me know!
<lgierth>
nice pic :)
<jbenet>
richardlitt: yeah, let's try some this week.
<jbenet>
thanks
<richardlitt>
jbenet:
<jbenet>
btw, lgierth: maybe account all the little infra things you have to do in sprint pm, etc. so it's clear where your time is getting sunk, and what people can most help with
<daviddias>
that is a good picture :)
<richardlitt>
Aight. My turn, if you're all set jbenet ?
<jbenet>
yep
<lgierth>
jbenet: i will! i actually forgot a few things again -- like going over dnslink-deploy with richardlitt
<lgierth>
(will edit update)
<richardlitt>
quick question; how are you guys doing line by line pastes? I'm using IRCCloud, it just wants to chunk it
<daviddias>
click 'send as message'
<daviddias>
and it does that for oyu
<richardlitt>
Summary:
<richardlitt>
I was AFK for most of the week, which I did not take into account when I planned this sprint on Monday. So, I need to be better at that, and at saying beforehand that I will or will not be online much. Also, I need to be careful not to overload on the sprint tasks.
<richardlitt>
I PRed a few things - go-ipfs Contribute, ipfs/support Readme. I looked at all of the IPFS issues this week and am starting to get a wider feel for the community as a whole and what we're doing, but still don't understand a large amount of the actual talk going on. This made the two reviews I did of diasdavid's modules hard, because I feel I don't have too
<richardlitt>
much to add. I started https://github.com/RichardLitt/ipfs-textbook as a place for me to slowly figure out and make a document aimed at teaching people about IPFS and relevant crypto. This is mostly for me, but hopefully more people will get involved.
<richardlitt>
I worked on a few other issues, like the baseURL issue for the website. But I didn't have time to do as much as I would like, or to overhaul the community page yet.
<richardlitt>
- [x] Write an issue about how to rework ipfs.io so it is more than a list of videos. If possible, draw or commit an alternative front page on a branch.
<richardlitt>
- [x] Create IPFS/support
<richardlitt>
- [x] Figure out how to put IPFS repositories on IPFS. Open a discussion in the community repo for this.
<richardlitt>
- [x] Close or review issues on ipfs/community
<richardlitt>
- [x] Start document about higher level abstraction education for IPFS to non-cryptos
<richardlitt>
- [ ] Work with ___ to make sure that set-dns-record tool works as expected.
<richardlitt>
- [ ] Rebuild the community page, or build a new one listing all of the repos currently in development and make sure that their internal READMEs are streamlined. Write a starlog about this once it is done, the blog needs more <3
<dignifiedquire>
jbenet: too late
<richardlitt>
Word. Well, that seemed to have worked. :(
<jbenet>
<jbenet>
(not sure what counts as a flood to freenode)
<richardlitt>
Cool. All good.
<daviddias>
did I miss your feedback on those repos, richardlitt ?
<richardlitt>
daviddias: I didn't have much. Opened one issue. Otherwise, ran the tests, all looked good
<richardlitt>
daviddias: I honestly don't know enough technically to be able to do much of a code review.
<richardlitt>
daviddias: I can try to do more PR reviews, but at the moment, all I can seem to be able to do is go "Yeah, that's good javascript, looks OK, no idea how to run or test this."
<richardlitt>
daviddias: Maybe I need a better tutorial on how to code review code that is outside of your technical purview?
<daviddias>
well, that is good feedback :) if you find something alarming or antipatterns, lmk :)
amstocker has joined #ipfs
<daviddias>
richardlitt: we can set a bunch of guidelines for a good repo
<daviddias>
and enforce them across the board
<jbenet>
richardlitt: good stuff. maybe let's make sure to catch up after all the discussions today and talk about all the random things.
<daviddias>
things like contributing guidelines can be really helpful for open source projects
<richardlitt>
daviddias: guidelines +1. Shall I start a PR, or you?
<dignifiedquire>
probably best to have a central one, with links in all the repos to keep them in sync more easily
<richardlitt>
dignifiedquire daviddias Building a Community guidelines Readme was one of the main things I had (and failed) to do this week
<richardlitt>
I PRed go-ipfs for the first time this week to start them having a Contribute.md, which is a step in the right direction. https://github.com/ipfs/go-ipfs/pull/1734
devbug has joined #ipfs
<richardlitt>
I'm also a member of https://github.com/contribute-md and hope to be able to use this information for them, and vice versa
<jbenet>
we keep them in one place to avoid massive duplication and getting out of sync (as dignifiedquire mentioned also)
<richardlitt>
jbenet: yep. hoping to update it if I can. There's a lot of stuff not in there - like how best to contribute
<richardlitt>
or what repos do what, etc. Need to make a roadmap and add it in there. keeping things centralised == +1
<dignifiedquire>
there is probably the need for contribution guidelines depending on the language in this case though, so maybe something like contributing-go.md and contributing-node.md
<richardlitt>
trying not to duplicate work.
<richardlitt>
dignifiedquire: +1.
apophis has quit [Ping timeout: 252 seconds]
<jbenet>
dignifiedquire: indeed +1
apophis has joined #ipfs
<richardlitt>
sweet. anything else?
<richardlitt>
If not, @dignifiedquire want to go next?
chriscool has joined #ipfs
<dignifiedquire>
sure
<dignifiedquire>
Started work only Friday, so it was more of a mini sprint for me, but I got through some stuff on the electron-app:
<dignifiedquire>
- [x] Added hot module reloading
<dignifiedquire>
- [x] Using babel and webpack for building
<dignifiedquire>
- [x] Improving stability (using safer ipc variant to avoid crashes on reload)
<jbenet>
amstocker: nice! maybe post on ipfs/notes about it
simonv3 has quit [Quit: Connection closed for inactivity]
<amstocker>
yeah I am planning to write something up
<amstocker>
I have been busy with the math GRE/grad school apps ;_;
<amstocker>
all i want to do is code
<richardlitt>
That would be cool.
<richardlitt>
Thanks @amstocker
<richardlitt>
Also, GREs can be fun!
<richardlitt>
@mildred, are you around?
<daviddias>
so nice knowing we will have a python-api too
<richardlitt>
doesn't look like it. Is there anyone I missed?
CarlWeathers has joined #ipfs
<amstocker>
the GRE isn't so bad, and I actually enjoy math tests, its just the whole process of applying to graduate schools that is stressing me out atm
<richardlitt>
amstocker: I feel you. x__x. I did that for too many years.
<richardlitt>
I think that's it for the sync. 10 minutes until the first call.
<richardlitt>
Please make sure to write up your issues for the next sprint in the etherpad, and don't forget to also carry over issues from the last sprint that weren't dealt with. Sprint etherpad: https://etherpad.mozilla.org/nTqGOVynPT
<richardlitt>
Since krl is out this week, the next call will be @jbenet and @dignifiedquire talking about electron.
<richardlitt>
./end of sync
<jbenet>
thanks richardlitt! well run!
<dignifiedquire>
thanks everyone
<richardlitt>
Cheers! Happy to do it. Can't next week, going to be AFK flying back from NC to boston after hiking over this coming weekend.
<richardlitt>
Will be online this week until Friday.
<dignifiedquire>
jbenet: ready when you are
<jbenet>
dignifiedquire: thanks almost rdy. 3min.
<dignifiedquire>
k
<richardlitt>
heading to lunch and rock climbing gym. Will be around intermittently. jbenet let me know when is good to sync
<jbenet>
ok, probably after node-ipfs/go-ipfs talks.
<richardlitt>
cool. I'll join for node-ipfs
Encrypt has quit [Quit: Quitte]
<jbenet>
btw, amstocker: if you want to lead a py-ipfs hangout and so on, feel free to add one to the pm repo
<daviddias>
ahahah whyrusleeping we will even that out when you come here! :)
dosse91 has joined #ipfs
jknoll has joined #ipfs
dosse91 has quit [Client Quit]
<jbenet>
dignifiedquire: ack, sorry need to relocate. brb in ~5.
<dignifiedquire>
jbenet: np
nicolagreco has quit [Read error: Connection reset by peer]
nicolagreco has joined #ipfs
<amstocker>
jbenet: by py-ipfs do you mean the actual python implementation of ipfs or just a general discussion of ipfs and python?
devbug has quit [Ping timeout: 240 seconds]
<whyrusleeping>
amstocker: i believe he means a discussion of just ipfs and python in general
<whyrusleeping>
including the api bindings, an implementation, or anything else that combines the two (would be my guess)
<amstocker>
ok cool
eternaleye has joined #ipfs
<jknoll>
Hello all, I'm trying to follow this simple IPFS tutorial and everything works with the provided index.html, but changing even a single word, re-adding, and attempting to view the new hash via the gateway.ipfs.io proxy results in "Path Resolve error: context deadline exceeded". Is there something obvious I'm missing?
<rschulman>
jknoll: Are you changing the file on your own machine and then adding from there?
captain_morgan has quit [Ping timeout: 240 seconds]
<voxelot>
jknoll: sounds like the gateway can't find your node
<jknoll>
rschulman: Yes.
<rschulman>
jknoll: Are you running the daemon on your machine?
<rschulman>
(something the author left out of that blog post)
<rschulman>
jknoll: run "ipfs daemon" if you haven't.
<jknoll>
Appears not, running…
eternaleye has quit [Client Quit]
<rschulman>
(otherwise your node isn't connected to the whole network, which is why the gateway can't find it!)
<voxelot>
try running on your local gateway as well when you have the daemon up, 'http://localhost:8080/ipfs/<hash>
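A condensed sketch of the fix being described here (hash placeholder):

    # the daemon must be running so the rest of the network can reach your node
    ipfs daemon &
    HASH=$(ipfs add -q index.html)
    # check the local gateway first, then the public one
    curl "http://localhost:8080/ipfs/$HASH"
    curl "http://gateway.ipfs.io/ipfs/$HASH"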
<jknoll>
That was it, thanks so much for the help.
Eudaimonstro has quit [Ping timeout: 256 seconds]
<jknoll>
So does my machine need to be running the daemon in perpetuity for the content to be available? I suspect not.
Eudaimonstro has joined #ipfs
<rschulman>
jknoll: Yes, until someone else requests the file and thereby mirrors it
thomasreggi has joined #ipfs
chriscool has quit [Ping timeout: 250 seconds]
<rschulman>
but only until that other computer does a garbage collection
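To keep content on a second node past garbage collection, the usual approach is to pin it there; a sketch:

    # on the mirroring node: pin the hash so 'ipfs repo gc' won't remove it
    ipfs pin add <hash>
    # pinned content survives a manual garbage collection
    ipfs repo gc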
eternaleye has joined #ipfs
<jknoll>
Can requesting it through the clearnet proxy serve as that secondary request?
<rschulman>
jknoll: There are other plans in the works to fix that problem, including a way to pay others a small amount to hold on to your hash too.
<rschulman>
jknoll: Yes, for a little while, but the gateways have a pretty aggressive garbage collection timeline, I think.
<voxelot>
my gateway just piles up junk until i break my 25gb limit :) http://voxelot.us/ipfs/
<voxelot>
feel free to abuse my cache
pau_ramon has joined #ipfs
<pau_ramon>
Hi!
<jknoll>
voxelot: Thanks, much appreciated.
jknoll has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<richardlitt>
Happy birthday daviddias!
<daviddias>
Thank you :):)
<richardlitt>
Did I break something in go-ipfs by merging that commit? Got told the build was broken. X.X
<whyrusleeping>
richardlitt: talking about 1734?
<richardlitt>
whyrusleeping: yes
<whyrusleeping>
looks like the commit is fine
<whyrusleeping>
just a random test failure on one of the ci boxes
<whyrusleeping>
richardlitt: although sometimes you will have to be careful changing docs in the go-ipfs repo, we build them into the code
<whyrusleeping>
and expect hashes to look correct in the tests
<richardlitt>
fair. Cool, thanks.
<jbenet>
richardlitt: pls don't merge anything on go-ipfs, whyrusleeping and i will handle that. it's a pretty critical codebase, why and i are the maintainers. (am very protective about it). i know those were docs, but it makes it easier to reason about things if only why and i are merging. ((also, usually good to give time for more LGTMs -- i hadn't seen the PR and
<jbenet>
have some feedback.))
samiswellcool has quit [Quit: Connection closed for inactivity]
<jbenet>
ok, next chat up. lgierth, ready for infra?
amstocker has quit [Ping timeout: 240 seconds]
<richardlitt>
jbenet: You had seen the PR, and commented on it. I fixed the comment and assumed it was fine since that was your only one. I'll just wait for others to merge in the future.
<lgierth>
jbenet: i'm here
* lgierth
goes to hangout
<jbenet>
richardlitt: sorry, correction: i hadn't finished commenting on it. I'll say LGTM once i think it's ready for merge.
<CounterPillow>
Relevant part: "Developers were enticed into downloading this tampered version of Xcode because it would download much faster in China than the official version of Xcode from Apple’s Mac App Store."
<lithp>
wow, such a clever attack vector
<whyrusleeping>
CounterPillow: coming soon! your data, fast and exactly what you expect it to be!
<richardlitt>
Fazo, I'm thinking - maybe one PR per question would be better?
<richardlitt>
Then we could discuss and edit each PR.
apophis has quit [Quit: This computer has gone to sleep]
<fazo>
yeah, you're right
<richardlitt>
Maybe that's a ton of hassle given the size we already have, though
apophis has joined #ipfs
<richardlitt>
What do you think? Let this in, open issues to clarify any points I have?
intern has joined #ipfs
<richardlitt>
Having Issues as questions is probably better in the long run than how I did it, anyway. Once resolved, add text to Readme.
<fazo>
uhm, maybe for now we could use the PR to discuss my additions
<intern>
i added a file to my local repo and it was accessible through ipfs gateway, then i switched off my internet and tried from another device and that file was still accessible. I can't understand how files distribute automatically in IPFS
<fazo>
then when they're in we can open an issue to discuss each question and add new ones individually
<fazo>
intern: we're working on a textbook that will do just that
<fazo>
for now I can explain briefly
<fazo>
when you add the file, your computer has it
<fazo>
then you asked gateway.ipfs.io to get the file, but of course it didn't have it
<fazo>
so it asked the network to find the file until it figured out that your computer had it
Spinnaker has joined #ipfs
<fazo>
so your computer sent it to gateway.ipfs.io
<fazo>
then gateway.ipfs.io also had the file
Encrypt_ has joined #ipfs
<fazo>
when you opened the other device, it probably got the file from gateway.ipfs.io
chriscool has quit [Client Quit]
Encrypt_ has quit [Client Quit]
<richardlitt>
hmmmm
<fazo>
this is a safe process (no one can edit the files) because if you edit a file its address changes, and every computer can calculate the address of a file, so your pc can verify authenticity of files
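That property is easy to see from the CLI; a one-word edit produces a completely different hash (file name is arbitrary):

    echo "hello world" > note.txt
    ipfs add -q note.txt      # prints one hash
    echo "hello there" > note.txt
    ipfs add -q note.txt      # prints a different hash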
chriscool has joined #ipfs
<richardlitt>
fazo: good call.
<richardlitt>
Will merge, then discuss!
<fazo>
ok :)
<richardlitt>
+1+1+1
<fazo>
just wait maybe a few days
<fazo>
so I can get it polished and completed
<fazo>
meanwhile you can still make suggestions of course
amstocker has quit [Ping timeout: 246 seconds]
<richardlitt>
fazo: This looks awesome.
<richardlitt>
Merged now, will wait.
<intern>
fazo: does ipfs force nearest nodes to store data ? i tried on a large file as well and after a few minutes it was fully streamable even when my system was off
<fazo>
richardlitt: well I just couldn't stop you :D
<fazo>
intern: no
<fazo>
intern: only nodes that download the data will store it for a while and help seed it
jfis has joined #ipfs
<fazo>
going afk for a while
<richardlitt>
heh sorry.
devbug has joined #ipfs
simonv3 has quit [Quit: Connection closed for inactivity]
Eudaimonstro has quit [Remote host closed the connection]
<dignifiedquire>
thanks, I’m going to add some animation effects tomorrow to improve screen transition, which should make it even smoother :)
<fazo>
dignifiedquire: is it OSX exclusive or should it work on other platforms too?
<dignifiedquire>
this should work everywhere
<fazo>
very good, it should help a lot to make it easy for people to host content
rwc has quit [Quit: Page closed]
<dignifiedquire>
yes, should make for a nice first impression hopefully :)
<dignifiedquire>
(also I want to have this myself)
<fazo>
I'm sure it will :) we just need to get go-ipfs to work fine on windows
<dignifiedquire>
I’ll leave that fun to other people :P
<fazo>
lol yeah. I can't wait to build my dream distributed reddit
<dignifiedquire>
alright I’m gonna hit the bed now, everyone enjoy their night/day
pfraze has quit [Remote host closed the connection]
<fazo>
good night!
nicolagreco has joined #ipfs
dignifiedquire has quit [Quit: dignifiedquire]
jamescarlyle has quit [Remote host closed the connection]
jfis has quit [Quit: s]
nicolagreco_ has joined #ipfs
nicolagreco has quit [Read error: Connection reset by peer]
nicolagreco_ is now known as nicolagreco
<lithp>
fazo, a distributed reddit is something I'd love to see too
<Vyl>
Sounds like anarchy.
<lithp>
do you have any idea how voting would happen?
<fazo>
yep, I thought about it for a while. When you vote you publish a file which is your vote
<fazo>
everything a user does is signed ofc
<fazo>
if your client sees multiple votes on the same post from the same client it only counts one
nicolagreco_ has joined #ipfs
<fazo>
basically every user exposes all their posts and votes and then there are communities (subreddits) which are named like this: community_name:user_id
<fazo>
user_id is the user that owns it
<fazo>
and only data regarding that community signed from him is trusted
<lithp>
ahh, that's a neat idea, have lots of small dictatorships
<fazo>
ahahah yes, then the users can vote by posting on the one they prefer
<fazo>
and the owner can use css files to customize how the viewer renders the community
<fazo>
similar to reddit
jfis has joined #ipfs
apophis has quit [Quit: This computer has gone to sleep]
<fazo>
the problem will be figuring out the most efficient way to store the data
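One hypothetical layout, just to make the idea concrete (names and structure are illustrative, not a spec): each user keeps one small file per post or vote in a tree published under their IPNS name, with communities keyed as community_name:user_id.

    my-state/                               # published via: ipfs add -r -q my-state/ | tail -n1
      posts/<post-hash>.json                #   then: ipfs name publish <root-hash>
      votes/<post-hash>.json                # one signed vote per file; duplicates count once
      communities/some_community:<user_id>/
        settings.json                       # owner-signed community settings
        theme.css                           # custom rendering, as mentioned above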
<lithp>
how would you prevent someone from making lots of fake accounts to vote up items?
nicolagreco_ has quit [Read error: Connection reset by peer]
<fazo>
lithp: nice question
nicolagreco_ has joined #ipfs
<fazo>
I'll figure it out I hope
<intern>
would a tag based search work fine on ipfs ?
nicolagreco has quit [Ping timeout: 256 seconds]
nicolagreco_ is now known as nicolagreco
<fazo>
intern: there is no search engine at the moment
<fazo>
intern: but indexing ipfs is possible
<fazo>
so you can build one if you want :)
<ion>
fazo: Perhaps a web of trust could be used. That would mean everyone would not use the same votes though.
apophis has joined #ipfs
<intern>
i am working on a distributed video portal, new to ipfs
<fazo>
ion: that's a solution, but I was thinking some kind of proof of work
* whyrusleeping
doesnt like proof of work
<fazo>
yeah I know it's bad especially for a web app
<fazo>
need to figure out something else. If I implement private messaging into it, I can make it so you have to request to join a community
<fazo>
that way, no bot problems
<fazo>
I hope
Encrypt has quit [Quit: Sleeping time!]
rendar has quit []
apophis has quit [Quit: This computer has gone to sleep]
warner has joined #ipfs
<lithp>
I think proof of work also only works if you're bitcoin
<lithp>
you want most of the computing power in the world to be trustworthy
chriscool has quit [Ping timeout: 264 seconds]
<lithp>
if it's just most of the computing power in your network, you've raised the bar an attacker has to climb, but there are a ton of attackers out there capable of beating your network
Eudaimonstro has joined #ipfs
apophis has joined #ipfs
apophis has quit [Client Quit]
<lithp>
fazo The best I've come up with so far also involves limiting the size of communities, or at least not letting random people join and start voting :(
<spikebike>
lithp: well capable of beating your network != worth it
<spikebike>
I suspect spam for instance would not be financially feasible if each email cost a CPU minute
<fazo>
lithp: it wouldn't limit the size of communities, it would also solve problems such as brigading
<fazo>
lithp: and other "wars"
<fazo>
lithp: admins could release signed statements that the client can read, allowing permits signed by moderators to be considered valid
<fazo>
so that admins can assign moderators to help deal with huge amounts of people joining communities
<fazo>
or instead of messaging the admins, to join you have to make one or more posts (marked as "outsider posts" in the clients) until some moderator allows you to join
<fazo>
then votes are only counted from members
<fazo>
outsiders can still comment and post but their stuff can be filtered out
<fazo>
there's still the issue that some guy can create 902489343294 identities and make one post from each
<fazo>
flooding the community
<fazo>
flooding the moderators and admins
<spikebike>
fazo: ya, that's where proof of work becomes handy
<fazo>
yeah looks like light PoW is going to be required... too bad
<spikebike>
fazo: check out bitmessage to see how they handle these things
<fazo>
yeah I know how it works :)
<spikebike>
ah, k
<fazo>
I got inspiration from that. I had already thought about PoW but wanted to see if I could avoid it
<spikebike>
well IPs are costly so that's likely the second best way, if you block an IP it's not easy for most to use a different one
<spikebike>
although IPv6 makes that more difficult
<fazo>
how can you block an IP in a service that runs over ipfs?
<spikebike>
my consumer internet connection has 2^68 IPs ;-)
<fazo>
their data may come from another node's IP
<fazo>
how do you tie together a user and its IP
<fazo>
it's almost impossible
<lithp>
spikebike: I wonder how much an ad on reddit costs, compared to how much running a CPU for an hour costs
<spikebike>
fazo run ipfs swarm addrs
<fazo>
spikebike: oh, I forgot about that
<fazo>
that's why you don't do stuff alone.
<fazo>
ok, so communities can IP ban people. I should write this stuff down. Anyone want to contribute?
<whyrusleeping>
jbenet: what would a good escape character be for ipfs commands?
nicolagreco has quit [Read error: Connection reset by peer]
nicolagreco_ has joined #ipfs
<whyrusleeping>
i want to be able to write 'ipfs dht get /ipns/<hash>' and have it turn the /ipns/<hash> into the appropriate key
<whyrusleeping>
so i'm thinking of maybe something like 'ipfs dht get @/ipns/<hash>'
<whyrusleeping>
and it would also work for other things like 'ipfs dht get @/pk/<hash>' to retrieve someones public key
<spikebike>
could someone running IPFS try this command, it transfers a minimal amount of data
<spikebike>
mkdir tmp; cd tmp; time for i in `ipfs cat /ipfs/QmRMEsnkVXkC4FFzCw4RDfeb4bY2Zgr8szFFsgZgGuA1g2`; do ipfs cat $i; done
<spikebike>
each of the 32 files is just a byte or two
jknoll has joined #ipfs
<fazo>
hey, you could put a shellshock exploit in one of them. Too bad it has been fixed practically everywhere
jedahan has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<spikebike>
yeah there is a bit of trust involved, the content of those files is just for i in `seq 1 32`; do echo $i > $i; ipfs add $i; done
<fazo>
why does it switch to a tmpdir?
<spikebike>
ha, oops
<spikebike>
yeah that's not needed, I needed a tmp to create the files, not read them
<spikebike>
can skip that part
<fazo>
I'm doing it and it works
<spikebike>
what did the output of time say
<spikebike>
I'm interested in the first run (not cached), not any additional runs.
jedahan has joined #ipfs
<spikebike>
something like real0m0.316s
<spikebike>
user0m0.153s
<spikebike>
sys0m0.114s
<fazo>
while fixing whitespace I accidentally removed it lol
<fazo>
it also has problems terminating though
<fazo>
by the way about 1 second for each item
<spikebike>
ah, really, that wasn't expected
<spikebike>
so 32 seconds or so to run
<spikebike>
?
<fazo>
yes, a little bit more
<fazo>
I think it's fast enough considering I got one of the worst internet connections in europe.
<fazo>
maybe you aren't even in europe seeding those files
<spikebike>
well I expected the first to be slow, then you'd find one of the peers seeding, then faster for the rest
<fazo>
spikebike: it can't do that because you didn't give ipfs a list of links
<fazo>
to ipfs those hashes were all unrelated
<fazo>
if you ls a folder then try to get a file it probably guesses the guy that has the folder also has the contents
<fazo>
if it doesn't, it should
<spikebike>
right, they are unrelated, but all seeded by the same peer
<fazo>
ipfs still has to figure out who has them
<spikebike>
this script is somewhat safer to run
<spikebike>
time for i in `ipfs cat /ipfs/QmRMEsnkVXkC4FFzCw4RDfeb4bY2Zgr8szFFsgZgGuA1g2`; do ipfs cat $i &> /dev/null; done
bsm1175321 has quit [Ping timeout: 246 seconds]
<spikebike>
fazo: sure, this is the value of the test, that is apparently how it works.
<fazo>
I have 124 connections
<fazo>
it has to send a request to all of them
<whyrusleeping>
i had a service at one point that would cause one of my nodes to add a small amount of random data and add it to ipfs
<whyrusleeping>
returning you the hash
<whyrusleeping>
so you could do 'time ipfs cat (curl mythi.ng/randomhash)'
apophis has joined #ipfs
apophis has quit [Client Quit]
steve__ has joined #ipfs
jfis has quit [Quit: s]
Eudaimonstro has quit [Ping timeout: 246 seconds]
<spikebike>
whyrusleeping: heh, yeah I was thinking similar, didn't find a better way to find the hash than ipfs add foo | awk '{print $2}'
<spikebike>
fazo: btw, it took my somewhat local nodes under 1 second
<spikebike>
(total)
<whyrusleeping>
spikebike: 'ipfs add -q foo' prints just the hash btw
<spikebike>
whyrusleeping: ah, useful, thanks
simonv3 has joined #ipfs
nessence has joined #ipfs
jedahan has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
ocdmw has quit []
<fazo>
spikebike: wow
ocdmw has joined #ipfs
<spikebike>
the two ipfs nodes are 15 hops or so from each other, about 14us
nicolagreco_ has quit [Ping timeout: 260 seconds]
jedahan has joined #ipfs
jknoll has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
jedahan has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<spikebike>
I'm interested in another datapoint if anyone else wants to run the script
<spikebike>
time for i in `ipfs cat /ipfs/QmRMEsnkVXkC4FFzCw4RDfeb4bY2Zgr8szFFsgZgGuA1g2`; do ipfs cat $i &> /dev/null; done
patrickod is now known as pod
pod is now known as patrickod
jedahan has joined #ipfs
patrickod is now known as pod
pod is now known as patrickod
patrickod is now known as p7d
jedahan has quit [Client Quit]
kvda has joined #ipfs
p7d is now known as pod
<ipfsbot>
[go-ipfs] whyrusleeping created ipns/patches (+3 new commits): http://git.io/vn0Qj
<ipfsbot>
go-ipfs/ipns/patches ae3f775 Jeromy: ipns record selection via sequence numbers...
<ipfsbot>
[go-ipfs] whyrusleeping opened pull request #1739: Ipns/patches (master...ipns/patches) http://git.io/vn07Z
<voxelot>
It is not possible to make a JSONP POST request. :(
<voxelot>
thought about hacking cors for a second
<whyrusleeping>
voxelot: hm?
<voxelot>
you can pad json objects with a function call and then run client <script src="http://voxelot.us/api/v0/cat?<hash>"> and it will call the function at the data endpoint and do a get request without CORS
<voxelot>
but not post
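The other way around that is to enable CORS on the node's API so the browser can POST directly; a sketch using the go-ipfs HTTPHeaders config (restrict the origin in practice):

    ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["http://example.com"]'
    ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["GET", "POST", "PUT"]'
    # restart the daemon for the new headers to take effect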
devbug has quit [Ping timeout: 272 seconds]
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed ipns/patches from a08c8ed to 3b29e26: http://git.io/vn0bZ