<user24>
alu: Nice :) Just create an issue on github and I'll fix it.
<alu>
:D okay
taw00 has quit [Read error: Connection reset by peer]
taw00 has joined #ipfs
afisk has joined #ipfs
afisk has quit [Remote host closed the connection]
afisk has joined #ipfs
afisk has quit [Remote host closed the connection]
<sega01>
:-D i just wrote a patch for surf to support ipfs in the browser. it's hacky, but seems to support ipfs://Qsomethingsomething
<zero-one>
you... you actually use surf?
<sega01>
no, not really
<sega01>
i wanted to make a proof of concept. i use it on occasion and used to use it more
<sega01>
hmmm. easier approach. i wonder if you could make a url handler for ipfs:// that would call chrome in turn? maybe just to translate the url to http://127.0.0.1:8080/ipfs/Q...
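A minimal sketch of the handler sega01 describes, assuming the default local gateway on 127.0.0.1:8080 and chromium as the browser (the script name and registration step are hypothetical):
```sh
#!/bin/sh
# ipfs-open: hypothetical ipfs:// scheme handler.
# Rewrites ipfs://<hash>[/path] to the local gateway and opens it in chromium.
url="$1"
exec chromium "http://127.0.0.1:8080/ipfs/${url#ipfs://}"
```
Registered as the x-scheme-handler for ipfs (e.g. via a .desktop entry and xdg-mime), this handles direct ipfs:// URLs; as noted just below, it does nothing for links embedded in rendered pages.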
nycoliver has joined #ipfs
<sega01>
shoot. only seems to work for direct urls
<sega01>
embedded hrefs don't seem to work
MahaDev has quit [Quit: Leaving]
user24 has quit [Quit: ChatZilla 0.9.92 [Firefox 45.0.2/20160413222640]]
pfraze has quit [Remote host closed the connection]
<deltab>
sega01: could it also work for /ipfs/?
<sega01>
how do you mean?
<deltab>
g_str_has_prefix(uri, "/ipfs/")
<sega01>
i wouldn't consider /ipfs/ a uri. and it'd have side effects, i believe
<deltab>
ah
<sega01>
that function only works when you call a url like: surf ipfs://Qsss
<sega01>
i think you'd need to modify webkit for it to actually do anything like that with rendered links
<sega01>
and by that point, i'd rather throw in a real ipfs client into the browser, if it could work well enough and be secure
<sega01>
probably have one sandboxed ipfs client process plus simple validation: once it gets a file, do a hash match and then show it to the client if it looks good
<sega01>
although, that wouldn't work for video
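A rough sketch of the hash-match step sega01 mentions, assuming the sandboxed fetcher has written the file and we know which hash it was requested under (both variables are hypothetical); `ipfs add --only-hash` recomputes the hash without writing to the repo, and the comparison only holds if the same chunker defaults were used when the content was originally added:
```sh
# recompute the hash of the fetched file and compare it to the requested one
computed=$(ipfs add -q --only-hash "$fetched_file")
if [ "$computed" = "$requested_hash" ]; then
    echo "hash matches; safe to hand to the renderer"
else
    echo "hash mismatch; discard" >&2
fi
```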
<lgierth>
sega01: there's a firefox addon for fs:/ipfs/ and fs:/ipns/
<lgierth>
a chrome addon too, but that one hasn't been maintained much recently
<lgierth>
the firefox addon is great though <3
<lgierth>
we settled on fs: after long discussion
<deltab>
because having file:, filesystem:, afs: and nfs: wasn't enough :-)
<sega01>
i see. so the uri is: fs:/ipfs/Q...?
<lgierth>
yeah
<lgierth>
and the addon lets you "redirect" that to ipfs.io or your own daemon
<sega01>
gotcha. nifty. too bad there isn't a primary/secondary href function, is there?
<lgierth>
heh yeah
<lgierth>
the addon is kind of a proof of concept to eventually get fs: integrated into browsers
<sega01>
it is pretty cool. seems to work for me
<sega01>
my ipfs client seems to lock up all the time on uncached objects :(
<sega01>
working now. ipfs.io and the dns txt hack aren't working
<sega01>
but some cached objects are alright
<sega01>
have a good night!
KatzZ has joined #ipfs
a1uz10nn has quit [Read error: Connection reset by peer]
KatzZ has quit [Quit: Bye]
pfraze has joined #ipfs
Oatmeal has quit [Ping timeout: 250 seconds]
ggp0647 has quit [Ping timeout: 264 seconds]
ggp0647 has joined #ipfs
<whyrusleeping>
Stskeeps: it wasn't travisperson, he's been sitting next to me for the last few hours
pfraze has quit [Remote host closed the connection]
<whyrusleeping>
er, sega01
<whyrusleeping>
o/
Oatmeal has joined #ipfs
<Stskeeps>
whyrusleeping: odd; anyway, weirdest thing to wake up at 4am to notice :P
ggp0647 has quit [Ping timeout: 264 seconds]
ggp0647 has joined #ipfs
anshukla has quit [Quit: Leaving...]
nycoliver has quit [Ping timeout: 244 seconds]
Oatmeal has quit [Ping timeout: 250 seconds]
pfraze has joined #ipfs
pfraze has quit [Remote host closed the connection]
<dignifiedquire>
daviddias: left a comment for you
<dignifiedquire>
we still need a way to tell bitswap a) the currently connected peers, b) when a new peer is connected, and c) when a peer is disconnected
<daviddias>
nice, we didn't overlap :) I've started with the part of registering the hashes we are interested in and the ability to cancel
Boomerang has joined #ipfs
dmr has joined #ipfs
ashark has quit [Ping timeout: 260 seconds]
chriscool1 has joined #ipfs
afisk has joined #ipfs
conway has joined #ipfs
<conway>
just got a new mini-project done. IPFS-ified a client-only Node-RED in the browser (use chrome): /ipfs/QmSCrZUPkqH4gsncsD7tGXPiFpymsbwmywsHXLDhBtSQ6T
corvinux has joined #ipfs
afisk has quit [Ping timeout: 252 seconds]
<VictorBjelkholm>
conway, I'm trying it, I have no idea what it is though
<conway>
it's an internet-of-things drag-and-drop javascript engine. Try something like wiring "Voice rec" --- "Debug" and allow the microphone when chrome asks.
<conway>
then click "DEPLOY" in the upper right, and the script runs in the browser.
<conway>
Normally, with Node-Red, the backend handles 'flow execution'. But in this case, it's 100% client side, therefore suitable for IPFS.
ashark has joined #ipfs
<VictorBjelkholm>
conway, oh, now I understand. Need to have the console open to see any output as well, it seems
<conway>
in the case of the "Debug" node, you do. But you could easily hook it up to "espeak" plugin and have what you say spoken back.... Or have a function that watches for keywords and does stuff :)
<VictorBjelkholm>
conway, hm, I see. Well, hard for me to give feedback without figuring out any use case for it so i'll just leave it at that
<conway>
that's fine :) I just use Node-Red a great deal, with my IoT infrastructure I've built. IPFS provides that last step of a pervasive 'infinite' storage across my cloud.
pfraze has quit [Remote host closed the connection]
<mnp>
@conway, i'm in the high scale iot industry. what's your infrastructure like?
<conway>
I have 3 different areas where my sensors are: Home, hackerspace, art studio. I use Tor Hidden Services to create a pervasive flat network across all my machines, and figured out how to get .onion resolution for any program on Linux.
<conway>
I use Mqtt (mosquitto), Node-Red, MongoDB, and Tor for my full stack.
<dignifiedquire>
daviddias: great, I'm finishing the internal wantlist right now
<conway>
And once everything is configured with .onion addresses, it doesn't matter where those machines physically are, as long as I have a routable IP address (even if natted)
<mnp>
nice
<mnp>
that's a lot of integration
<mnp>
so that's all running on each of your devices?
ylp1 has quit [Quit: Leaving.]
<conway>
I have Tor hiddenservice:22 on every machine, as well as a modification that allows the DNS resolver to resolve .onion links seamlessly (without going through tor browser/proxy)
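For reference, one common torrc recipe for transparent .onion resolution looks roughly like this (conway's setup may differ); the system resolver then forwards .onion lookups to 127.0.0.1:5353, and traffic to the mapped range is routed through Tor's TransPort:
```
# /etc/tor/torrc (sketch)
DNSPort 5353                       # Tor answers DNS queries here
AutomapHostsOnResolve 1            # map .onion names to virtual addresses
VirtualAddrNetworkIPv4 10.192.0.0/10
TransPort 9040                     # transparent proxy port for the mapped range
```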
corvinux has quit [Ping timeout: 240 seconds]
<conway>
I have 1 machine that runs my broker. It also happens to be my home IoT controller, so it also necessitates Node-Red. I have Mongo running on crankylinuxuser.net via an HS.
<haad_>
richardlitt: what time do we have our hangout? in 30mins or 1.5hrs?
nrw has quit [Quit: Connection closed for inactivity]
<conway>
mnp: The Tor piece means a lot of things: 1. no monkeying with nats, ip addresses, dyndns setup, port forwards; 2. security/anonymity; 3. guaranteed data destination (or failure); 4. all .onion machines can be treated topologically as if they were on a single ethernet hub
<conway>
Point 4 is interestingly shared with ipfs, where every key hangs off of /ipfs/ and also looks topologically flat. Of course, please correct me if I'm wrong :)
<dignifiedquire>
daviddias: pushed wantlist
<dignifiedquire>
where does your code live?
<daviddias>
I've it locally
<dignifiedquire>
wanna share? ;) just so I get an idea what you are doing cause I need to plan how I will continue
<mnp>
conway: I'm not sure, but I know ipfs has been having trouble with nat traversal. I think there is an issue open to use Tor transport, which might be nicer than the IP ones for that reason alone.
<daviddias>
you are way ahead of me :)
<richardlitt>
haad_: I believe in 30 minutes.
<conway>
mnp: One way I'd handle that is to traverse through Tor, opening a connection from 'inside the nat to outside'... but that blasts away any privacy/anonymity of tor. For my purposes, i'm not dead-set on the anonymity; it's secondary.
<dignifiedquire>
daviddias: okay, moving on to the interesting bits then, the decision engine
<richardlitt>
Mid-week hangout will be happening in 25 minutes, at 12:00noon EDT, 9:00am PDT, 6:00pm CEST, 4:00pm UTC.
<Icefoz>
Also, while I'm here, is there anywhere to go to learn more about the status of filecoin? It sounds really useful but the webpage only seems to have a white paper...
<dignifiedquire>
will update you as I go along
<daviddias>
ok :)
sivachandran has joined #ipfs
<sivachandran>
whyrusleeping: how can I add an ipfs dependency package through gx? it looks like i have to add the package directory to ipfs first and add the directory hash to package.json, am i right?
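For what it's worth, a sketch of the gx flow as I understand it (the exact commands may differ, and the package hash below is a placeholder): you don't add the directory to ipfs by hand; `gx publish` does that for the package and prints its hash, and `gx import` records the dependency in package.json:
```sh
# in the dependency's directory: publish it (gx adds it to ipfs and prints the hash)
gx publish

# in the project that depends on it: import by hash; gx updates package.json
# and fetches the package into the vendor directory
gx import <package-hash>
```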
cryptix has quit [Quit: leaving]
<noffle>
morning ipfstronauts
<daviddias>
Mornin' :)
Not_ has joined #ipfs
<daviddias>
dignifiedquire: did you get access to the repo?
<dignifiedquire>
daviddias: not yet, but whyrusleeping said he will do it soon
<daviddias>
ok :)
<Icefoz>
Hmmm. Does IPFS offer any particular way to communicate with peers, such as a messaging protocol? Filecoin may or may not be going anywhere, but it shouldn't be too difficult to make some manner of distributed cache system...
<noffle>
Icefoz: you could build this on top of ipfs
<conway>
Icefoz: are you thinking of something like Tox baked in to the pub.priv key generated?
<Icefoz>
noffle: I intend to build it on top of ipfs, peers just need to be able to talk to each other to figure out whether a remote peer has a file a local peer should cache or such.
<Icefoz>
My goal is basically to be able to say "I will give X GB of storage to the network in exchange for having Y GB of my own stuff stored with a certain certainty."
pfraze has quit [Remote host closed the connection]
<conway>
right now, I've an open offer to host stuff on my end. main machine is backed by 1GBps, along with a few other VPSes
pfraze has joined #ipfs
Iiterature has joined #ipfs
<dignifiedquire>
whyrusleeping: are you coming?
cjb has joined #ipfs
Ronsor` has quit [Ping timeout: 260 seconds]
Ronsor` has joined #ipfs
<richardlitt>
Starting now!
conway has quit [Ping timeout: 250 seconds]
Oatmeal has quit [Quit: Suzie says, "TTFNs!"]
Oatmeal has joined #ipfs
Ronsor` has quit [Remote host closed the connection]
Ronsor` has joined #ipfs
munksgaard has quit [Ping timeout: 246 seconds]
Ronsor` has quit [Ping timeout: 268 seconds]
herzmeister has joined #ipfs
Aeon has quit [Ping timeout: 244 seconds]
Aeon has joined #ipfs
<richardlitt>
thanks all!
<voxelot>
thanks richardlitt!
Boomerang has quit [Quit: Leaving]
<haad_>
whyrusleeping: so the mem leaks. I've been running ipfs daemon + orbit on DO with 512mb mem. 0.3 ran fine in terms of mem, but 0.4 uses up the 512mb pretty much as soon as some communication happens with peers (object get/put), so it's a bit annoying. the only solution I have now is to ramp up the DO node to whatever mem is needed; my local daemon seems to stay around 1-1.5gb mem throughout the day, so I reckon 2gb is what I need.
Not_ has left #ipfs ["Leaving"]
libman has joined #ipfs
<haad_>
whyrusleeping: well, tbh, not sure if they're mem leaks. just using a lot more mem than previously.
<libman>
Hi all. I'd like to resume the conversation we started yesterday about how IPLD relates to storing data as WebComponents (which is pretty much XML).
<whyrusleeping>
haad_: IMO, 'using more memory than normal' == leaks
<noffle>
voxelot: I think you can glean my knowledge of ipld from the docs on github
<voxelot>
noffle: thanks, i think i have a good understanding of ipld, just missing the link as to why it is good for pub/sub
<noffle>
voxelot: I don't think ipld is inherently geared toward pubsub -- not sure what you mean
Akaibu has joined #ipfs
<voxelot>
hmm maybe im mistaken, i thought i saw somewhere in an issue jbenet saying that pubsub would need ipld definitions first
Encrypt has joined #ipfs
<noffle>
might be nice to have(?), but we can definitely do pubsub with the protobuf format we have today
<voxelot>
right, that is what i was thinking, trying to think how linking would add to the system
taw00 has quit [Ping timeout: 246 seconds]
conway has joined #ipfs
palkeo has joined #ipfs
taw00 has joined #ipfs
<conway>
Hmm. Is there a way to easily get the output root hash of a share into another program? Say, for easy IPNS publishing?
<whyrusleeping>
of course, the internet comes back right after the hangout ends
<noffle>
conway: get the output root hash of what?
<conway>
if I "ipfs add -r /foo", is there an easy way to get the root hash of /foo ?
<noffle>
conway: it should output it
<noffle>
the last hash
<conway>
It does, with other helpful info as well. I can strip it easy enough with grep :)
<mythmon>
conway: you might consider `ipfs add -q -r /foo | tail -n 1`, instead of grep
<conway>
awesome. thank you :)
<mythmon>
-q makes it print only the hashes, without the extra data, which makes it easier to use programmatically
<conway>
that's what I was missing :)
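Tying mythmon's one-liner to the IPNS publishing conway mentioned at the start, a minimal sketch (assumes a running daemon and publishes under the node's default key):
```sh
# add the directory, keep only the final (root) hash, publish it to IPNS
root=$(ipfs add -q -r /foo | tail -n 1)
ipfs name publish "$root"
```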
<libman>
IMHO `ipfs add` should have an option to create an .ipfs dotfile in every directory it works through. This TSV dotfile would contain the directory / filename and the hash, and not be placed on IPFS itself. That way you can easily look up hashes of files without rehashing them.
<libman>
Maybe also the timestamp of the hashing, which subsequent `ipfs add` runs could use to quickly skip unmodified files.
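A rough sketch of what such a manifest could look like, done as an external script rather than an `ipfs add` option (the .ipfs filename and TSV layout are libman's proposal, not an existing feature; `stat -c %Y` is GNU stat):
```sh
#!/bin/sh
# Write a TSV of path, hash, and mtime for every file under $1.
# --only-hash computes hashes without copying data into the local repo.
dir="$1"
find "$dir" -type f | while IFS= read -r f; do
    printf '%s\t%s\t%s\n' "$f" "$(ipfs add -q --only-hash "$f")" "$(stat -c %Y "$f")"
done > "$dir/.ipfs"
```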
<deltab>
I'd prefer it to be stored separately, allowing for quicker access and supporting read-only media
<Icefoz>
libman: It shouldn't be in the directory it works through, it should be cached in ~/.ipfs/
<Icefoz>
Or... what deltab just said.
<mythmon>
for writable directories, i don't think that could be trusted. you'd want to verify it by hashing files, and that's most of the work anyways
M-fs_IXlocWlFZHF has joined #ipfs
M-fs_IXlocWlFZHF has left #ipfs ["User left"]
<conway>
mythmon: you can trust your own computer, right? By storing hashes of files you know aren't going to change, I can convert every file I have into a potential read-store for IPFS, even if I choose not to share it out.
<conway>
For example, I have a 500 GB 'music and movie' hard drive. I compute hashes for everything. Now, if I see links matching any of those files' hashes, I pull it locally.
<mythmon>
conway: that's different
<conway>
It's how Kazaa and some of the not-so-nice filesharing clients did it. Scan, precompute, and send to DHT supernodes for processing.
<libman>
Icefoz: I'm fine with that. There are pros and cons to both: (A) a dotfile in the same directory as the physical files read by `ipfs add`, and (B) one big datafile for all directories under $IPFS_PATH. The advantage of A is that it's simpler to read with existing Unix tools; for B you'd need additional commands to look up the IPFS status of a given physical file.
<Icefoz>
The disadvantage of A is that it's easier to get in the way of existing Unix tools, such as accidentally tar'ing it when you don't intend to. Or rsync'ing it.
<Icefoz>
See: the Thumbs.db file created by the Windows image viewer and all the annoying places it shows up.
<conway>
.DS_Store is annoying, as well as Thumbs.db
<conway>
:)
<libman>
It would also be nice to have a daemon that monitors directories for added / modified files and automatically adds changes to IPFS.
<Icefoz>
inotify!
<Icefoz>
Or something.
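A minimal version of that watcher using inotifywait from inotify-tools (the directory is hypothetical; a real daemon would also want to debounce events and republish an IPNS entry):
```sh
#!/bin/sh
# Watch a directory tree and re-add files to ipfs whenever they finish being written.
inotifywait -m -r -e close_write --format '%w%f' /srv/shared |
while IFS= read -r path; do
    ipfs add -q "$path"
done
```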
<libman>
Yup. I even wonder if we can pitch IPFS integration to the developers of OpenZFS, Hammer2 (I'm a BSD zealot), etc. But that's looking too far ahead.
<conway>
hmm.. Is there a way to, at slower speeds, share a directory from the native fs without doing an 'ipfs add -r'?
<conway>
maybe, in conjunction with the atime attribute, to watch if files were written to recently?
_Vi has quit [Ping timeout: 260 seconds]
Boomerang has joined #ipfs
<Icefoz>
libman: That seems like the wrong problem domain... IPFS doesn't really care _how_ files are stored, which is what filesystems are for.
jaboja has quit [Ping timeout: 250 seconds]
zorglub27 has quit [Ping timeout: 250 seconds]
<libman>
Many FSes already store hashes of the files. If they did this the IPFS way, it would be super-fast to look up files by IPFS hash.
CBm has joined #ipfs
<Taek>
jbenet: can I get your opinions on muxado, muxado2, and more specifically muxado2 vs yamux?
<dignifiedquire>
daviddias: should I just start merging into master on js-ipfs-bitswap? or do you want to have sth working first and review it?
jedahan has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
jedahan has joined #ipfs
sivachandran has quit [Quit: Connection closed for inactivity]
matoro has quit [Ping timeout: 260 seconds]
<ipfsbot>
[go-ipfs] whyrusleeping created feat/0.4.1-changelog (+1 new commit): https://git.io/vwzJ8
<ipfsbot>
go-ipfs/feat/0.4.1-changelog a2f5294 Jeromy: add changelog for v0.4.1...
<ipfsbot>
[go-ipfs] whyrusleeping opened pull request #2602: add changelog for v0.4.1 (master...feat/0.4.1-changelog) https://git.io/vwzJ7
jaboja has joined #ipfs
matoro has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to feat/0.4.1-changelog: https://git.io/vwzUh
<ipfsbot>
go-ipfs/feat/0.4.1-changelog 57238bb Jeromy: ipfs version 0.4.1...
dguttman has joined #ipfs
zorglub27 has joined #ipfs
jaboja has quit [Ping timeout: 268 seconds]
Looking has joined #ipfs
rendar has quit [Ping timeout: 252 seconds]
jaboja has joined #ipfs
rendar has joined #ipfs
s_kunk has joined #ipfs
_whitelogger has joined #ipfs
conway has quit [Quit: Page closed]
achin is now known as Tunadrain
conway has joined #ipfs
<daviddias>
dignifiedquire: it will be easier to review a PR
<daviddias>
than code on master
<dignifiedquire>
daviddias: it will be a very large pr :P
<daviddias>
I'll open it up and tinker with it too
mildred has joined #ipfs
ygrek has joined #ipfs
Encrypt has quit [Quit: Quitte]
Tunadrain is now known as achin
<ipfsbot>
[go-ipfs] whyrusleeping merged feat/0.4.1-changelog into master: https://git.io/vwznt
<ipfsbot>
[go-ipfs] whyrusleeping deleted feat/0.4.1-changelog at 9e3b5c3: https://git.io/vwznm
<ipfsbot>
[go-ipfs] whyrusleeping tagged v0.4.1 at 1fba09c: https://git.io/vwznc
corvinux has joined #ipfs
corvinux has quit [Changing host]
Boomerang has quit [Quit: Leaving]
sametsisartenep has quit [Quit: May the force be with you]
<edrex>
whyrusleeping: we're hacking at google fremont, feel free to come by
<whyrusleeping>
edrex: ooooh, i'll finish up this release stuff and head over :)
<whyrusleeping>
edrex: wanna pm me more details on location?
<edrex>
awesome! heading to lunch in a few (just another part of the building)
<edrex>
yeah, will do
dignifiedquire has quit [Remote host closed the connection]
<edrex>
Google Seattle entrance for hacking is on the north side of the 601 N 34th St Parkview building, across from the PCC Natural Markets grocery store.
<edrex>
There is no parking on site. There are a number of paid lots in the area. Certain free street parking opens up after 6pm (check meter signs).
<edrex>
Bike racks can be found in front of PCC and Red Door as well as on N Northlake Way between the two Google office buildings.
<edrex>
oops..
Akaibu has quit [Ping timeout: 276 seconds]
<edrex>
ah well. there you go ⬆
Akaibu has joined #ipfs
<whyrusleeping>
lol
<whyrusleeping>
thanks :)
<whyrusleeping>
!pin QmWbNgHZsBiwnVhu5FNahK2VdaV2nMH7gHw2E7tGeJ6GKx new distributions page
<pinbot>
now pinning /ipfs/QmWbNgHZsBiwnVhu5FNahK2VdaV2nMH7gHw2E7tGeJ6GKx
rendar has quit [Ping timeout: 246 seconds]
M-eternaleye has quit [Changing host]
M-eternaleye has joined #ipfs
<edrex>
whyrusleeping: we're grabbing lunch on the waterfront side, back in the room in 20 or so
dignifiedquire has joined #ipfs
conway has quit [Ping timeout: 268 seconds]
rendar has joined #ipfs
Tv` has quit [Remote host closed the connection]
corvinux has quit [Ping timeout: 260 seconds]
PrinceOfPeeves has joined #ipfs
Akaibu has quit [Remote host closed the connection]