<muvlon>
personally I think it's a bit obnoxious, but the idea sounds neat
<AniSky_>
muvlon You want to know what buzzwords/hype is?
<AniSky_>
Oh wait there are Javascript fans in here.
AniSky_ is now known as AniSkywalker
<muvlon>
that's an understatement if I ever heard one :D
<muvlon>
the ipfs community is 90% node people for some reason
<AniSkywalker>
But yet it's in Go...
<AniSkywalker>
which is a much better choice, but it's interesting.
<muvlon>
well, the first implementation is
<AniSkywalker>
first implementation + CLI
<muvlon>
there's now a js implementation
<muvlon>
I think you can do cool web stuff with ipfs
<muvlon>
but I hope this will not turn into the sole focus of the project :S
<AniSkywalker>
Right, but I also think that's the only reason for IPFS in js.
<AniSkywalker>
For the browser.
<muvlon>
yeah
mguentner has quit [Quit: WeeChat 1.7]
mguentner has joined #ipfs
yar_ is now known as yar
<AniSkywalker>
Has it occurred to anyone that we could make protocol handlers at the OS level and bypass the browser nonsense?
<MikeFair>
You mean FUSE
<MikeFair>
or ipfs://
<MikeFair>
Using IPLD I even figured out how to make a real hard drive
<MikeFair>
AniSkywalker: So I guess the short answer to your question is ; "sure has" :)
<MikeFair>
err IPLD/IPNS
john1 has joined #ipfs
<AniSkywalker>
Then why don't we make a desktop app with a basic GUI that says whether or not the daemon is running, and if it is running show the localhost:5001/webui page, and have it register an OS-level protocol handler?
<AniSkywalker>
Why are they bothering with browser extensions?
<muvlon>
the cli works fine for me so far
<muvlon>
but sure, somebody could make a GUI
<MikeFair>
probably expertise
<muvlon>
^ that too
<AniSkywalker>
I mean it takes an hour to make an electron app.
<MikeFair>
they'd welcome the commit
<AniSkywalker>
We should do what docker does.
<MikeFair>
I assume electron is a JS UI thingy
<AniSkywalker>
Except do it better.
<muvlon>
what kind of protocol handler are you envisioning?
<MikeFair>
AniSkywalker: You mean ipfs://
<MikeFair>
right?
<AniSkywalker>
'/ipfs/[hash]' typed into the browser.
<AniSkywalker>
Since that doesn't resolve to a HTTP protocol, the browser consults the OS (in case it's a file path)
<AniSkywalker>
Or turns it into /a/b...
<AniSkywalker>
wait...
<muvlon>
i'm fine with file://ipfs/garbage
<AniSkywalker>
that means we just have to get Fuse packaged
<AniSkywalker>
Dropbox manages it somehow, so can we.
<MikeFair>
(and fuse doesn't work with windows atm. there's a FUSE for windows project ; but I'm not sure they know about it)
MDude has joined #ipfs
<AniSkywalker>
No, so we have to write a kernel extension :)
<MikeFair>
No you don't; that's what FUSE is all about; make a new FS in userspace
<MikeFair>
I'm also talking about looking at making a virtual hard drive
<MikeFair>
So as far as the OS is concerned it's USB Storage or something
<muvlon>
how would you access the hard drive?
wallacoloo_____ has quit [Quit: wallacoloo_____]
<MikeFair>
mount it ; it's a virtual USB device
<muvlon>
I mean, it would be a block device, right?
<MikeFair>
yep
<muvlon>
what does block 0x4068 or something mean for ipfs?
<MikeFair>
ipfs volume create [somesize]
Boomerang has quit [Quit: Lost terminal]
<MikeFair>
basically an ipns lookup with some IPLD trickery
<muvlon>
how does it map to ipfs hashes?
<MikeFair>
The top level volume is ipns
<MikeFair>
below that is an IPLD tree of block ranges
<AniSkywalker>
I was joking about the kernel, but we might actually need it for an OS level protocol.
<kythyria[m]>
> Has it occurred to anyone that we could make protocol handlers at the OS level and bypass the browser nonsense?
<kythyria[m]>
You could, but 1) browsers will like that even less, and 2) the only thing that maps well onto posix or windows filesystems is things designed to look a lot like FAT, NTFS, or ext2.
wallacoloo_____ has joined #ipfs
<MikeFair>
^Hence my focus on a block device
<muvlon>
imo, ipfs still maps _much_ better to a regular old filesystem than it maps to a block device
<MikeFair>
muvlon: Not as easily
<MikeFair>
Well, sorta
<MikeFair>
it maps well to inodes
<MikeFair>
inodes = entries in IPLD
<MikeFair>
It looks like a filesystem but it's way too dynamic to be a FS
<AniSkywalker>
What I wanted mainly was for the IPFS daemon to intercept calls to the OS to resolve '/ipfs/X' and return the appropriate data.
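AniSkywalker's interception idea can be sketched as a small dispatcher that recognises /ipfs/ and /ipns/ prefixes before the string falls back to ordinary OS path handling. The function names below are illustrative, not a real daemon API:

```python
# Toy dispatcher for the idea above: recognise /ipfs/<hash> and
# /ipns/<name> before the OS treats the string as a local file path.
# classify/extract_hash are illustrative names, not a real IPFS API.

def classify(path: str) -> str:
    """Decide who should handle a path typed into the browser/OS."""
    if path.startswith("/ipfs/"):
        return "ipfs"      # immutable content address
    if path.startswith("/ipns/"):
        return "ipns"      # mutable name, resolved to an /ipfs/ path first
    return "os"            # ordinary file path, let the OS have it

def extract_hash(path: str) -> str:
    """Return the content-identifier portion of an /ipfs/ path."""
    prefix = "/ipfs/"
    if not path.startswith(prefix):
        raise ValueError("not an ipfs path")
    return path[len(prefix):].split("/", 1)[0]
```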
<kythyria[m]>
muvlon: it does, but it's not as like ext2 on a local disk as people think it is.
<muvlon>
many fileystems mounted on my computer right now don't behave like ext2 on a local disk at all
<MikeFair>
hehe, imho ipfs, once you "get" CAS, is nothing like traditional filesystems; the entire assumption is just backwards
<MikeFair>
You don't put data into files with ipfs; ipfs tells you where that file is located
<muvlon>
how is that like a block device at all?
<kythyria[m]>
(for one thing, the magic number thing alone assumes random access is cheap)
<muvlon>
with a block device, you write to specific locations
<MikeFair>
muvlon: Right; so that's where the DAG comes in; you create, essentially, a JSON object with the block numbers
<kythyria[m]>
muvlon: True, but the API and common usage basically assumes reasonably fast local disks.
<MikeFair>
muvlon: Then as the stuff changes you update the links in the JSON file
<muvlon>
kythyria[m], I was responding to MikeFair's idea, sorry
<MikeFair>
kythyria[m]: I'm more emulating a USB 1.0 flash drive
<muvlon>
MikeFair, and where does that get you?
<kythyria[m]>
And most of the filesystems mounted on a typical linux machine are IPC pretending to be a filesystem.
<MikeFair>
muvlon: You then keep that json object published to an ipns node if you want the volume to be public
<MikeFair>
muvlon: The OS says "stick this data in block XYZ"; the fs driver sticks in the data and gets the CID for it; it then updates entry XYZ with a link to the CID
<muvlon>
kythyria[m], they can be both
<muvlon>
using the filesystem for IPC is an old hat, way back from UNIX
<MikeFair>
muvlon: so what that gets you is something that looks and acts like a block device on top of CAS
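MikeFair's write path can be sketched in a few lines. Here sha256 stands in for a real CID, and plain dicts stand in for the IPFS blockstore and the mutable JSON DAG object; all names are illustrative:

```python
import hashlib
import json

# Sketch of the scheme described above: the OS says "stick this data
# in block XYZ"; we store the data content-addressed and update entry
# XYZ in a JSON-like index with a link to the resulting hash.
# sha256 stands in for a real CID; `blobs` for the IPFS blockstore.

blobs = {}    # content hash -> bytes ("the network")
index = {}    # block number -> content hash (the mutable DAG object)

def write_block(block_no: int, data: bytes) -> str:
    cid = hashlib.sha256(data).hexdigest()
    blobs[cid] = data           # immutable, content-addressed storage
    index[block_no] = cid       # mutable link, as in the JSON DAG object
    return cid

def read_block(block_no: int) -> bytes:
    return blobs[index[block_no]]

def publish() -> str:
    """The object you'd keep published under an IPNS name."""
    return json.dumps(index, sort_keys=True)
```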
<kythyria[m]>
muvlon: It's not actually a good IPC system, but yes. And it's still local.
<muvlon>
doesn't have to be local
<muvlon>
just look at 9p
<kythyria[m]>
9P is a terrible RPC protocol disguised as a reasonable remote filesystem protocol.
cyanobacteria has joined #ipfs
<AniSkywalker>
So here's what I notice: I think it's possible for IPFS to register ipfs:// at the application level through a bundled app.
<MikeFair>
Hmm, alright, to make it straightforward: despite the fact an actual implementation would use a string instead of a linear list; here's what the DAG would essentially be:
<AniSkywalker>
Then, when chrome et al. request for /ipfs/something, it should be able to be intercepted.
<kythyria[m]>
... why would you stick a block device in IPFS anyway?
<MikeFair>
hehe; looks like I missed an F in that last entry :)
<muvlon>
kythyria[m], I think they're trying to stick ipfs into a block device, not vice versa
mguentner has quit [Read error: Connection reset by peer]
<MikeFair>
kythyria[m]: So I can share a "disk volume" with other people over the USB protocol
<MikeFair>
It's a usability thing: "Want to encrypt and share a file hierarchy?"
mguentner has joined #ipfs
<MikeFair>
Basically Dropbox
<AniSkywalker>
^ that would be the way to do that
<AniSkywalker>
I floated that idea some weeks ago and now I'm working on Liftbox
<AniSkywalker>
"Dropbox, except we don't drop your data."
<kythyria[m]>
... you do know that no modern filesystem implementation is safe for concurrent access to the underlying block device?
<MikeFair>
kythyria[m]: I know some FS authors who would likely disagree with that; and ipfs is safe for concurrency because it's essentially read-only
AniSkywalker has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<kythyria[m]>
I know there's a few for SAN-type usage where block devices are shared, but ones designed so that two machines with no knowledge of each other can access the same block device?
<kythyria[m]>
If it's read-only, then yes.
AniSkywalker has joined #ipfs
<AniSkywalker>
You know what's not safe? Closing your laptop for three seconds.
<muvlon>
I still don't see how this would provide anything like dropbox to users
<MikeFair>
yes; but there are trade offs to every scheme ; it usually boils down to trading speed for consistency
<AniSkywalker>
(And expecting everything to be OK.)
<kythyria[m]>
MikeFair: I guess if it's a read-only "block device" and the driver removes it before switching to a different image.
<AniSkywalker>
muvlon What I'm doing with Liftbox is basically, user is entitled to a Merkle tree with up to X low-level blocks.
<kythyria[m]>
AniSkywalker: ?
<AniSkywalker>
Each block is 256kb padded, so files get rounded up.
<kythyria[m]>
(to the "closing your laptop for three seconds" remark)
<AniSkywalker>
Oh my mac puts everything to sleep, screwing up docker builds, etc
<AniSkywalker>
That was in RE: ... you do know that no modern filesystem implementation is safe for concurrent access to the underlying block device?
<muvlon>
AniSkywalker, sorry again, I was asking about MikeFair's weird USB stick idea
arkimedes has quit [Quit: Leaving]
<MikeFair>
muvlon: You'd give your counterparties an ipns volume key; the virtual drive volume mounter would announce a new USB device, and use that key to access your JSON DAG tree
arkimedes has joined #ipfs
<MikeFair>
muvlon: the data coming from the DAG would be an actual mountable filesystem
<AniSkywalker>
Anyways, with Liftbox the client does just like Dropbox's client does, chunk files and request permission to upload them.
<AniSkywalker>
The backend sends multipart-upload permission tokens to the client.
<muvlon>
how is that in any way better than just giving them a key to a regular ipns name which contains a regular file hierarchy?
<muvlon>
seems like a lot of trouble (and overhead!) for no gain
<kythyria[m]>
AniSkywalker: OSX is so awful at dealing with suspends that stuff crashes?
<AniSkywalker>
No, just my configuration.
<AniSkywalker>
I've got to fix it.
<AniSkywalker>
I thought it'd be fun to play with energy saver.
<AniSkywalker>
Anyways, that's it for tonight. Till the morrow!
<MikeFair>
muvlon: because I can't encrypt and modify the whole volume easily
<muvlon>
MikeFair, I think modifying and encrypting stuff at a file-system level is much easier than doing it at a block level
<MikeFair>
muvlon: it is; which is why if you provide a USB block device, you can put whatever FS you want on it
<muvlon>
that seems very ass backwards
<muvlon>
if you send me a dropbox link, I can't reformat that dropbox to fat32, and I don't think anybody would want to
<MikeFair>
otherwise you're restricted to sharing encrypted files not filesystems
<kythyria[m]>
Why do you want to share an encrypted FS anyway?
<MikeFair>
and the same can be done with this
<kythyria[m]>
If you do, just add it the regular way and use your OS' loop mount facility to mount from /ipfs
<MikeFair>
kythyria[m]: Because i'd really rather not have the entire world have access to seeing all my data all the time
<kythyria[m]>
As opposed to having a layer that encrypts chunks before they get put into ipfs and decrypts them on removal?
<MikeFair>
kythyria[m]: (1) that's not working on windows yet; (2) I don't think the automatic IPNS republish on modification is working on any system yet and I'm not sure it's planned; and (3) reencrypting the data and copying it into the mount every time isn't practical
<MikeFair>
kythyria[m]: This doesn't encrypt the chunks; this puts the chunks "as-is" into IPFS
<MikeFair>
kythyria[m]: The Block Device layer aligns the data into the 256K chunks
<kythyria[m]>
by reporting really gigantic blocks to the OS?
<MikeFair>
no; by keeping track of what "blocks" belong to which "chunks" and updating the right chunk(s) when there's a change
<muvlon>
hmm
<kythyria[m]>
I'd expect a normal filesystem to not care about packing things into chunks though
<MikeFair>
So it's a 16k or 32k or whatevertheuserwants "blocksize"
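The packing MikeFair describes reduces to arithmetic: with a user-chosen OS blocksize and fixed 256 KiB IPFS chunks, each OS block lands at a fixed offset inside exactly one chunk, assuming the blocksize divides the chunk size. A sketch:

```python
CHUNK = 256 * 1024              # fixed IPFS chunk size, per the discussion

def locate(block_no: int, blocksize: int):
    """Map an OS block to (chunk index, byte offset inside that chunk).

    Assumes blocksize divides the chunk size evenly (16k, 32k, ...),
    so a write to one OS block dirties exactly one chunk.
    """
    assert CHUNK % blocksize == 0
    per_chunk = CHUNK // blocksize
    return block_no // per_chunk, (block_no % per_chunk) * blocksize
```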
<muvlon>
you also lose _all_ of the deduplication ipfs offers
apiarian_ has quit [Ping timeout: 260 seconds]
<MikeFair>
muvlon: because the links are part of the content key; there's not as much of that as could be going on
<MikeFair>
muvlon: I also somewhat disagree
<MikeFair>
muvlon: Backups of the disk drive would have a lot of dedup
<MikeFair>
it took me a while to see it; but let's say there's a data block of 256k of all 0s
<MikeFair>
I haven't made it yet; but there's cid for that
sharp_ has joined #ipfs
<MikeFair>
I'd imagined that every data file on the planet that had a 256k aligned block of 0s would share that same address
apiarian has joined #ipfs
<MikeFair>
(as part of its internal links)
<MikeFair>
but they don't
<muvlon>
this already happens in ipfs, even with unaligned blocks
<muvlon>
because of content-sensitive chunking
<MikeFair>
it's only if all the linked neighbors of that block are also the same that it will dedup
<muvlon>
however, all of that goes out the window with encrypted volumes
<muvlon>
there won't be a 256k block of 0s in your encrypted volume anywhere
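muvlon's point is easy to check: two identical 256 KiB zero blocks hash to the same address, but after any per-volume encryption the ciphertexts, and therefore the addresses, diverge. The XOR keystream below is a toy illustration only, not real encryption:

```python
import hashlib

# Content addressing dedups identical plaintext blocks; encrypting the
# volume makes the same plaintext hash differently under each key.
# toy_encrypt is a toy XOR keystream, NOT real encryption.

def addr(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def toy_encrypt(block: bytes, key: bytes) -> bytes:
    stream = hashlib.sha256(key).digest()
    stream = (stream * (len(block) // len(stream) + 1))[: len(block)]
    return bytes(a ^ b for a, b in zip(block, stream))

zeros = bytes(256 * 1024)       # the all-zeros block from the discussion
```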
<kythyria[m]>
I suppose that's intentional though. You don't leak information about which bits of the file are the same.
<MikeFair>
right; I'm not saying that every block device will encrypt
<muvlon>
to whom would you leak it?
AkhILman has quit [Ping timeout: 276 seconds]
<muvlon>
only to people you're sharing it with or who are sharing it with you
<MikeFair>
muvlon: to anyone who knew/wanted the volume key
<MikeFair>
For example; I want to put my keysafe store in ipfs
seharder has joined #ipfs
<muvlon>
then put in an encrypted json file that contains the actual hash leading to the keysafe
<muvlon>
unless somebody can decrypt the json file, they won't know the hash
<muvlon>
relying on people not guessing the SHA256 of your private data is fine
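muvlon's scheme is a two-step indirection: publish the encrypted keystore by hash, and keep that hash inside a second encrypted JSON "pointer" document. Without the key you recover neither. A sketch, with the same caveat that the XOR keystream is a toy, not real crypto:

```python
import hashlib
import json

# Sketch of the indirection above: the keysafe blob is stored by hash,
# and the hash itself lives inside an encrypted JSON pointer file.
# toy_cipher is a symmetric toy XOR keystream, NOT real encryption.

def toy_cipher(data: bytes, key: bytes) -> bytes:
    stream = hashlib.sha256(key).digest()
    stream = (stream * (len(data) // len(stream) + 1))[: len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

store = {}                      # hash -> bytes, stands in for IPFS

def put(data: bytes) -> str:
    h = hashlib.sha256(data).hexdigest()
    store[h] = data
    return h

def publish_keysafe(keysafe: bytes, key: bytes) -> str:
    blob_hash = put(toy_cipher(keysafe, key))           # encrypted keystore
    pointer = json.dumps({"keysafe": blob_hash}).encode()
    return put(toy_cipher(pointer, key))                # encrypted pointer

def open_keysafe(pointer_hash: str, key: bytes) -> bytes:
    pointer = json.loads(toy_cipher(store[pointer_hash], key))
    return toy_cipher(store[pointer["keysafe"]], key)
```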
<MikeFair>
muvlon: (A) even if they know the hash the keystore file itself is encrypted; and (B) what you said; blown up to a bigger scale with blocks as files; is exactly the USB block device I just described (using the DAG directly instead of via a JSON file)
<muvlon>
what does it have to do with USB btw
<MikeFair>
I don't have to teach key manager applications ipfs
arpu has quit [Ping timeout: 240 seconds]
<muvlon>
yes, if you expose a block device
<muvlon>
but exposing a _USB_ device? why
<kythyria[m]>
And a writeable one at that.
<muvlon>
that's just going needlessly low-level for no reason
<muvlon>
a USB mass-storage device is an even worse abstraction for ipfs than a regular block device, imo
<MikeFair>
software virtualization ; then you format it with an FS and every OS that can read a USB flash device can read your ipfs volume (once ipfs is installed)
<kythyria[m]>
Except that if you can install the custom loopback driver that'd need, you can install FUSE or the like
<kythyria[m]>
Unless you're putting this into a separate bit of hardware that's actually using USB to communicate.
<MikeFair>
muvlon: what's the difference between those two; a USB mass storage device is way easier to write than an iSCSI or SATA software virtualization and all the OSes already understand transient USB keys...
<MikeFair>
kythyria[m]: I am planning that yes
<kythyria[m]>
You... didn't say that before.
<kythyria[m]>
And I presume this is a single-writer affair.
<MikeFair>
kythyria[m]: yes; but the writer token "might" be passable
<MikeFair>
kythyria[m]: So you might be able to take out a lease on writing
<kythyria[m]>
Also, it looks like USB MSC reuses the same command set as SATA
<MikeFair>
kythyria[m]: but whether I've done the actual hardware USB thing or not; a software block device is still useful
<kythyria[m]>
I'm having a hard time seeing why
<MikeFair>
think of any existing keysafe manager; it wants a file on your OS
<MikeFair>
A USB block device gives you that
<MikeFair>
It's virtually the same thing as FUSE with no FUSE support required
<kythyria[m]>
If you aren't doing the hardware thing, then if you can install a software block device you can install a custom filesystem.
<MikeFair>
(so I can use it on windows)
<kythyria[m]>
Does Windows even have support for loopback devices?
<MikeFair>
kythyria[m]: Right; which software block device model do you propose?
<MikeFair>
kythyria[m]: Yes
<kythyria[m]>
I don't propose a block device at all.
<kythyria[m]>
That's... actually surprising.
<MikeFair>
kythyria[m]: it's rarely used; it's even got symlinks and hardlinks
<MikeFair>
kythyria[m]: and mount points
<MikeFair>
At least since Windows 7 and later (I think Vista had some things but not all)
<kythyria[m]>
I know windows has symlinks, hardlinks, and mount points, I didn't know that it had block devices in a generally usable way.
<MikeFair>
well I'll just say that I know you can take a file and mount it like a block device; but I don't know the underlying mechanics
<kythyria[m]>
You can do that in Linux, too; I'm not sure it can be anything but a real file though
<MikeFair>
haven't gotten that far
<MikeFair>
So when you say "loopback device" I heard "expose a file as a block device"
<kythyria[m]>
Yes
arkimedes has quit [Ping timeout: 264 seconds]
<MikeFair>
I've only ever done it in the context of ISO file mounting
<MikeFair>
and VHD
<kythyria[m]>
Apparently it's possible to implement arbitrary block devices in userspace on linux. IDK if the kernel-mode bits for that are shipped by default in any distro, or if windows can do it.
<MikeFair>
That _possible_ is called FUSE
<MikeFair>
There's a FUSE for windows project that's pretty advanced
<kythyria[m]>
FUSE isn't about block devices at all, though.
<kythyria[m]>
It's about filesystems
<MikeFair>
ok fair enough
<MikeFair>
Though you can treat a block device as a file system with a fixed number of files that are of a fixed size and a predetermined numeric file name
<MikeFair>
And most hard drives actually do that these days
<MikeFair>
They're treated more like huge numerically indexed hash tables; not actual positional addresses
<MikeFair>
s/hash/lookup
<MikeFair>
(the firmware on the drives has started doing fancy stuff like "I always see requests for blocks 4, 124, and 678 together; I'll just relocate those so I don't seek so much")
aquentson1 has quit [Ping timeout: 240 seconds]
<MikeFair>
kythyria[m]: With respect to the deduplication; installed software and operating system files would generate a large number of dedup opportunities even when wrapped in a file system
<MikeFair>
with encrypted volumes; all bets are off; but that's where I'm thinking a COW layer would make up the difference -- so the base device would be unencrypted but my "change volume" could be encrypted
<MikeFair>
So installed software wouldn't necessarily get encrypted
<MikeFair>
but for now; I'm just thinking "small volumes" where small is on the order of 32G
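The copy-on-write idea above splits reads between a private overlay of changed chunks and a shared, unencrypted base image. A minimal dict-backed sketch, with illustrative names:

```python
# Sketch of the COW layer described above: a shared unencrypted base
# volume (deduplicable across users) plus a private overlay holding
# only the chunks the owner has changed. Dicts stand in for
# content-addressed storage; chunk keys are plain ints.

class CowVolume:
    def __init__(self, base: dict):
        self.base = base        # shared, read-only base image
        self.overlay = {}       # private "change volume" (would be encrypted)

    def read(self, chunk: int) -> bytes:
        if chunk in self.overlay:
            return self.overlay[chunk]
        return self.base.get(chunk, b"\x00")   # unwritten chunks read as zeros

    def write(self, chunk: int, data: bytes) -> None:
        self.overlay[chunk] = data             # the base is never modified
```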
<cblgh>
whyrusleeping: just checking that i'm understanding; your zcash explorer doesn't embed an ipfs node right?
google77 has joined #ipfs
<google77>
hi, will IPFS make it harder to break net neutrality?
Boomerang has joined #ipfs
<MikeFair>
google77: depends on how you think they plan to break it
google77_ has joined #ipfs
google77 has quit [Ping timeout: 240 seconds]
<MikeFair>
google77: It will make it easier to store widely downloaded content (typically it's easiest to see it reducing the need for Content Delivery Networks / transforming them into Content Dedication? Nodes)
<MikeFair>
It can improve security by guaranteeing the data that's in the file you requested is actually the data you meant to get (using its address is verifying its content hash in the same action)
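The security property MikeFair describes is checkable in a few lines: a fetch is accepted only if the received bytes hash back to the address that was requested. sha256 stands in for IPFS's real multihash here:

```python
import hashlib

# In content addressing, the address IS the integrity check: a fetch
# is only valid if the bytes hash back to the address requested.
# sha256 hex stands in for IPFS's real multihash/CID format.

def fetch_verified(address: str, received: bytes) -> bytes:
    if hashlib.sha256(received).hexdigest() != address:
        raise ValueError("content does not match its address")
    return received
```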
<MikeFair>
It can make it harder/impossible to block the distribution of certain data (but that's not really net neutrality's concern)
<MikeFair>
And lastly; the part that most impacts net neutrality; is that IPFS is designed to run over any delivery protocol; so attempts to block/thwart one protocol will just encourage users to use another (TCP over HTTP anyone?)
<MikeFair>
But the bottom line is "the net" currently controls the layer 0 delivery
<google77_>
cool
<MikeFair>
All your base belongz to zem
<google77_>
oh well
<MikeFair>
So if they say "you get 2MB/s" then you get 2MB/s
<google77_>
too bad
<MikeFair>
IPFS can help regulators thwart the reasons proposed for needing it
<MikeFair>
I completely agree that network operators can and should treat different data differently
<google77_>
I don't TBH.
<Stskeeps>
edge computing is also a funny issue, i.e. i would somewhat consider it fair to pay to stuff IPFS blocks on a nearby mobile tower for faster retrieval
<Stskeeps>
like i pay today to have AWS vms in certain regions, or S3
<MikeFair>
hehe - being part of designing, installing, building, and managing an ISP network altered my appreciation for the necessity
<MikeFair>
Not all links are created equal; and if you ask all links to perform equally ; you just have bad experiences all the way around
<google77_>
The internet shall ideally remain this wild west that it was until now.
<MikeFair>
For instance; SMS technically violates net neutrality ; (the data is delivered as part of the beaconing protocol)
<MikeFair>
So tiny packets of small messages are delivered with preference over larger packets
<MikeFair>
What I see happening (mostly because I'm hoping to be part of leading the charge) is better wireless coverage in the 900Mhz or similar bands
<MikeFair>
Eliminate the need for the central operators
<google77_>
Wasn't a goal of IPFS to distribute the web where the problem was that the web became too centralised on specific services? I am talking about big IT corps ofc.
<MikeFair>
Another idea that could help with net neutrality is P2P "service bots"
<google77_>
*one of the problems
<MikeFair>
Yes; but the web isn't just files; it's a large part of it; it's also "automated agents"
<google77_>
I see.
<MikeFair>
I've got these problems with IPFS now; I can't actually host a website on it
<MikeFair>
I can serve files from a web address; but that's not the same thing as making a shopping cart with checkout abilities
<google77_>
Do you mean you can't host a dynamic website yet? I think that is still on to do, since IPFS is alpha.
<victorbjelkholm>
MikeFair: I think a better way to frame it is that it's very easy to deploy websites today but it's a bit harder to build web applications
<victorbjelkholm>
on IPFS
<google77_>
Because static websites work fine as much as I have seen.
<MikeFair>
So I'm now the crazy mad scientist saying "We need to give each other script execution privilege in addition to file storage privilege" which is crazy/powerful/dangerous "take out the internet by a script kiddie if you flubbed it" talk
<MikeFair>
victorbjelkholm: Yes; or web crawlers; or IRC ; or video conference hosts
<victorbjelkholm>
MikeFair: sounds like what you need is Ethereum :)
shizy has joined #ipfs
<victorbjelkholm>
MikeFair: something like IRC shouldn't be thaaat hard today, certainly not impossible. Take a look at Orbit for example
<MikeFair>
yeah; something "like" Ethereum's theoretical idea is welcomed ... not specifically ethereum though
<google77_>
I've been researching IPFS and all of those problems are in the pipeline of being addressed. Also isn't Javascript essentially script execution privileges in the browser?
<MikeFair>
victorbjelkholm: I'm one of the three guys known to have actually had a real IRC like conversation on orbit (it wasn't the same) ;)
<MikeFair>
google77_: The browser is one centralized point in the system that can suck content through its I/O channel
<victorbjelkholm>
MikeFair: yeah, obviously Orbit is not perfect (yet :) ) but it demonstrates that with IPFS today, you could build something IRC-like at least
<MikeFair>
google77_: It's transient
<MikeFair>
victorbjelkholm: Oh TOTALLY!
<MikeFair>
And don't hear me wrong guys I'm totally pro-IPFS
<google77_>
no problem, constructive criticism is welcome
suttonwilliamd__ has quit [Ping timeout: 240 seconds]
<MikeFair>
I'm just trying to create some real applications that would encourage people to run ipfs daemon
<MikeFair>
s/just trying/am/
Foxcool has quit [Ping timeout: 268 seconds]
<MikeFair>
and the thing I noticed was I couldn't "host" anything in IPFS that could act like a smart agent or responder
Foxcool has joined #ipfs
<MikeFair>
It would need to be something like "Add script to IPFS and get CID"; "Add CID to IPFS daemon launch list";
<MikeFair>
Now anyone could add that CID to their daemon
<MikeFair>
The more that do; that becomes a cluster for that service
<MikeFair>
Collectively, that "running cluster instance" has its own CID
<MikeFair>
(probably in IPNS)
<MikeFair>
When the nodeset of peers running that script CID changes; a new "peer list" is created and the IPNS entry changed
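MikeFair's service-cluster idea boils down to: the set of daemons running a script CID is itself published content, re-addressed whenever membership changes. A sketch, with sha256 standing in for CIDs and a dict for IPNS:

```python
import hashlib
import json

# Sketch of the "service cluster" idea above: the peer set running a
# given script CID is serialized and content-addressed; on membership
# change, the new peer-list object gets a new hash and the (simulated)
# IPNS entry is repointed at it. Names are illustrative.

ipns = {}                       # service name -> current peer-list hash

def publish_peer_list(service: str, peers: set) -> str:
    obj = json.dumps(sorted(peers)).encode()    # canonical form
    h = hashlib.sha256(obj).hexdigest()
    ipns[service] = h           # mutable pointer, as IPNS would provide
    return h
```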
<google77_>
Well I am not a programmer, I am a network guy. So I can't help you there. But what I do know is that Benet did talk a lot about seeking API compatibility.
<MikeFair>
It likely requires some kind of consensus algorithm
HostFat has joined #ipfs
<MikeFair>
IPNS/IPFS and this service concept can be REST compatible
<MikeFair>
The new IPLD stuff is helping me a lot
HostFat_ has quit [Ping timeout: 260 seconds]
<MikeFair>
I can see how it can be integrated with the Unity game engine
<MikeFair>
embedded hardware IoT stuff
<MikeFair>
and the donations platform I'm building; but without a more formalized object concept and a consensus algorithm for modifying those objects in a transaction; presenting a good/usable distributed multiuser application is challenging for me :)
<google77_>
Though, if IPFS succeeds there will be some paradigm shifts in web development, won't there?
<MikeFair>
s/is/when
<MikeFair>
err if
cemerick has quit [Ping timeout: 268 seconds]
<MikeFair>
google77_: A complete inversion
<google77_>
Probably.
HostFat_ has joined #ipfs
<google77_>
bye
cemerick has joined #ipfs
<MikeFair>
google77_: That's one of the first principles I mention to prepare "programmer types" when "thinking IPFS" -- this might break your brain a bit because you need to invert your logic
HostFat has quit [Ping timeout: 240 seconds]
<MikeFair>
google77_: CAS is not what we're used
<MikeFair>
to
Foxcool has quit [Ping timeout: 252 seconds]
<MikeFair>
Stskeeps: working with IPFS is like working "from the edge in"
<MikeFair>
Or "the middle" to "the other middle"
<MikeFair>
:)
<Stskeeps>
nod
<Stskeeps>
i personally see ipfs/ethereum/etc as a new 'digitally enabled human being' centric way of doing web-based 'apps'
<Stskeeps>
i.e. websites store into your digital self, not into their own servers
<Stskeeps>
and much more person to person (a-la proper sharing economy)
<MikeFair>
Ok; question; is there any way we can make IPFS nodes create registered "shortlinks" for hash keys; copy/pasting them is really difficult in Windows CMD ;)
<Stskeeps>
MikeFair: i've seen creative uses of emoji..
<Stskeeps>
:P
<MikeFair>
Stskeeps: agreed; and seeing how to finally solve the distributed search challenge by accepting the risk of remote execution is what clinched the requirement for me
<MikeFair>
We can also model harder problems
<AphelionZ>
morning y'all
<MikeFair>
AphelionZ! o/ oi!
<MikeFair>
Interested in testing out some DAG based Pastebin?
locusf_ has joined #ipfs
pfrazee has joined #ipfs
<AphelionZ>
sure! do you have some code?
<AphelionZ>
i didnt see any forks or anything on my stuff
<AphelionZ>
MikeFair: whacha got?
<MikeFair>
No it was more just seeing what you should send to IPFS on each paste
<MikeFair>
It's two things
<AphelionZ>
oh ok cool, yeah lets talk about that
<AphelionZ>
also whyrusleeping I'd really love to test out the javascript version of the daemon if at all possible
<MikeFair>
The "node" which is the pastebin text; language; sessionId; basically captures the whole page at that point in time
<MikeFair>
And that gets you a CID
<AphelionZ>
C in CID being...
<MikeFair>
(ContentBasedAddress)
<MikeFair>
the hash
<AphelionZ>
ok im with you so far
<MikeFair>
Then, you have the other JSON object which is a list of times the "Submit" button was clicked and CID link for each of those moments
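So each Submit produces an immutable snapshot node plus an append-only history of (time, CID) links. A sketch of that shape, with sha256 hex standing in for real CIDs:

```python
import hashlib
import json

# Sketch of the DAG-based pastebin described above: each Submit
# freezes the whole page (text, language, session) into an immutable
# node, and a separate history object links every snapshot by time.
# sha256 hex stands in for a real CID.

history = []                    # list of {"time": t, "cid": ...} links

def submit(text: str, language: str, session_id: str, time: int) -> str:
    node = {"text": text, "language": language, "sessionId": session_id}
    cid = hashlib.sha256(json.dumps(node, sort_keys=True).encode()).hexdigest()
    history.append({"time": time, "cid": cid})
    return cid
```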
amosbird has joined #ipfs
<amosbird>
hello
<amosbird>
um, what are the common usages of ipfs?
<MikeFair>
hi there
<AphelionZ>
hey amosbird :)
<amosbird>
what can I get from it
<amosbird>
AphelionZ: hi
<amosbird>
Can I do file synchronization?
<amosbird>
prevent my files from being corrupted?
<MikeFair>
amosbird: think more like "Cloud Based Storage" that's actually cloud based
<MikeFair>
amosbird: You will get back exactly what you stored; and if you change the files you upload; those are stored as new things (so now you've got both things in the IPFS "cloud")
<r0kk3rz>
i find it better to think of it as transport rather than storage
<amosbird>
MikeFair: well, if I add a file to ipfs. will it be stored forever?
<amosbird>
even if my local disk fails
<r0kk3rz>
no
<amosbird>
hmm
cemerick has quit [Ping timeout: 268 seconds]
<MikeFair>
amosbird: With some caveats yes
<amosbird>
then why would I need this? version control?
cemerick has joined #ipfs
<MikeFair>
hehe
<AphelionZ>
amosbird: you want this so that when Google finally shuts down YouTube you don't lose all your videos
<MikeFair>
amosbird: If your files aren't accessed and the set of peers needs more storage space for stuff that is being accessed; your content could be purged out
<ebel>
sounds cool to be able to have a website hosted on IPFS and easily accessible to regular people! Whoever coded that up is cool.
<r0kk3rz>
amosbird: its a mechanism for allowing you to get content from whoever happens to have it, rather than the original source
<MikeFair>
amosbird: So there is a way to ensure you can do file recovery; you just need two nodes to do what is called "pinning" your backups
<MikeFair>
amosbird: That will guarantee those files will always be available in the network
<ebel>
re: ipns, is it possible to have multiple names?
<MikeFair>
ebel: Yes; via multiple keys
pcre has joined #ipfs
<MikeFair>
ebel: ipfs key
<amosbird>
MikeFair: sounds like a file sharing stuff
<amosbird>
with fs api
<amosbird>
can I setup a private ipfs network?
<r0kk3rz>
amosbird: its very similar to bittorrent in many ways, just more generalised
<MikeFair>
amosbird: that's its strongest aspect; but like r0kk3rz says; it's really a chatting set of nodes that can do some cool things as it gets smarter
<ebel>
MikeFair: ah ok. I'll have to look into that.
<MikeFair>
ebel: I have three atm; my local daemon; and two websites
<amosbird>
MikeFair: cool, where can I find examples for setting up an ipfs network?
<MikeFair>
ebel: I generated a key for each domain and named it for the domain
<ebel>
how do you "ipfs name publish" then? Docs don't have any mention of that
<MikeFair>
ebel: I then use ipfs name publish -k keyname /ipfs/contentaddress to publish data to that address
<ebel>
MikeFair: Gotcha. The old hidden -k option, eh? :)
<MikeFair>
ebel: I'm on windows and had to get the latest preRelease to get it ;)
<r0kk3rz>
amosbird: just run up some nodes, they're automatically 'on the network'
<amosbird>
r0kk3rz: wouldn't that be a public network?
<ebel>
MikeFair: :) I might wait a little bit and play around with IPFS first before running pre-released code :)
<r0kk3rz>
amosbird: yes of course
Foul is now known as Nycatelos
<MikeFair>
amosbird: I'm not sure about making a "private" network; you'd likely have to dig into more configs; but joining the public network is as simple as downloading the ipfs client and running it
<MikeFair>
ebel: In this case it's so far been pretty stable
<MikeFair>
ebel: and you just can't have the -k feature without it :)
wak-work has quit [Ping timeout: 245 seconds]
<r0kk3rz>
you change the discovery node addresses in the config, and hope nobody connects to you :D
<MikeFair>
ebel: i've been using it pretty regularly
<MikeFair>
amosbird: Well you're not really "publishing" anything
<MikeFair>
amosbird: I put a couple zipped up and password protected files
<MikeFair>
amosbird: I didn't do anything special with them and my node eventually cleared its storage of them
<MikeFair>
amosbird: and since no one else requested them; they weren't anywhere else
<amosbird>
MikeFair: hmm
<MikeFair>
amosbird: Someone has to ask for something that looks like this: QmZPuApbbZU875DN4M1gUtgEDCAUL2gzcEGFtaioHeFwCws51V
<MikeFair>
it doesn't happen by accident and can't be "searched" for
<amosbird>
MikeFair: so that is the only key provided by ipfs
<amosbird>
is there only one ipfs network?
Foxcool has quit [Ping timeout: 240 seconds]
<amosbird>
and we are all connected?
<MikeFair>
amosbird: no, but its the only default out of the box one
<MikeFair>
amosbird: yes
<MikeFair>
amosbird: So if you give me an address; I can access it
<amosbird>
MikeFair: cool
<amosbird>
let me test it
maciejh has joined #ipfs
<amosbird>
so I just give you a hash?
<MikeFair>
amosbird: yep
<r0kk3rz>
amosbird: its distributed, best not to think of it as one cohesive thing
<amosbird>
hmm, is this private url thing really secure?
* MikeFair
reminds himself to use "stat" not "cat" next time.
<amosbird>
717k
<r0kk3rz>
secure?
<MikeFair>
amosbird: ok; something in it really borked my terminal
<MikeFair>
amosbird: No one is going to guess the hash address and randomly testing them for something real is impractical
<amosbird>
MikeFair: hmm, your terminal is vulnerable
aquentson1 has quit [Ping timeout: 260 seconds]
<MikeFair>
actually; people can compute the hash
<MikeFair>
but they need to have the data first
<amosbird>
good point
<MikeFair>
(in which case why do they need you?)
<amosbird>
ok, will ipfs reach 1.0 in this year?
<r0kk3rz>
amosbird: if you're worried about security, encrypt your stuff. dont rely upon not being able to guess the hash
<amosbird>
r0kk3rz: yeah, i'll do that :)
<MikeFair>
I'm looking into keeping files I care about encrypted just in case someone is watching for the addresses I request; but it's not because I really think anyone is going to find my files; it's more because I'm paranoid that way
wak-work has joined #ipfs
<MikeFair>
and I sleep that much better. :)
konubinix has joined #ipfs
<MikeFair>
I'm now more excited about the streaming and pubsub stuff that's happening
<MikeFair>
and the database-esque storage addressing
<amosbird>
well I still cannot think of more usages other than file sharing .
<MikeFair>
IPFS via their IPLD project allows you to link JSON objects to other JSON objects in the network
<MikeFair>
amosbird: P2P distributed messaging (think of all that blockchain stuff)
Foxcool has joined #ipfs
<amosbird>
MikeFair: well, that involves broadcasting right?
<MikeFair>
amosbird: but that's via the top level ipfs commands ; it's more linking into the p2p api
<amosbird>
this is kind of passive file sharing
<MikeFair>
amosbird: Or p2p chatter
<MikeFair>
amosbird: The network is very active
<MikeFair>
amosbird: lots of chatter; the FS is very passive
<MikeFair>
amosbird: So this latest stuff is giving more API hooks into being part of chatter
<r0kk3rz>
amosbird: files are just data, you can have raw blocks with data in ipfs
<MikeFair>
amosbird: For example; like you said with passive; you can serve a static-files website directly from IPFS
ulrichard has quit [Remote host closed the connection]
<amosbird>
MikeFair: how can I get the hash list?
<MikeFair>
amosbird: You publish the directory recursively instead of a file
<amosbird>
if it's a server, why not just be a ftp server
<MikeFair>
that includes an index.html
<MikeFair>
amosbird: The goal is to replace the requirement for hosting domains at "servers"
<MikeFair>
amosbird: You can host a dynamic server that anyone can access from your own computer; and others can participate in "helping you" host that service/application
<MikeFair>
amosbird: there's then a bridge to DNS that will get browsers to "talk to"/"link with" your IPFS hosted service
<frood>
there's also nice things like a cryptographic guarantee that the block returned is the block you wanted
<MikeFair>
frood: I especially like that for distributed code development and versioned documents :)
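frood's guarantee can be sketched in Python: because the address is the hash of the content, any block a peer hands back can be verified locally by rehashing. SHA-256 here stands in for IPFS's multihash, and the function names are invented for illustration.

```python
import hashlib

def address_of(block: bytes) -> str:
    # Content addressing: the name of the data IS its hash.
    return hashlib.sha256(block).hexdigest()

def fetch_checked(address: str, untrusted_block: bytes) -> bytes:
    """Whichever peer served this block, we can verify it locally:
    rehash and compare against the address we asked for."""
    if address_of(untrusted_block) != address:
        raise ValueError("block does not match its address")
    return untrusted_block

good = b"hello ipfs"
addr = address_of(good)
assert fetch_checked(addr, good) == good
```

This is why it doesn't matter who you download from: a tampered block simply fails the check.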
<amosbird>
MikeFair: ok, but I think zerotier is more suitable for this
<MikeFair>
s/development/deployment
cyanobacteria has quit [Ping timeout: 276 seconds]
<frood>
and the ability to crawl state history
<amosbird>
i mean, for network access kind of things
<AphelionZ>
they dont really seem like the same thing... unless i'm mistaken
<MikeFair>
amosbird: SDN matters; but I think that'd be more underneath IPFS
<r0kk3rz>
you can use ipfs-corenet as a SDN
<MikeFair>
amosbird: It's a bit theoretical at this point; but an address in IPFS can represent a single "set of computers" that can give you a service; a "Service Address" where you can get the responses you need from the nearest available node
<amosbird>
I think sharing via ipfs.io is convenient
<MikeFair>
amosbird: so the point was that all nodes are equal access points to all the content; http://localhost:8080/ is the same as http://ipfs.io/
<ebel>
MikeFair: "could not resolve name"
<MikeFair>
Except for their physical location and properties (hardware/bandwidth) etc
<MikeFair>
ebel: but the ipfs.io page works for you right?
<amosbird>
MikeFair: what stuff do you use to setup this site. It's fun :)
<ebel>
MikeFair: yes. ipfs.io works
<MikeFair>
AdmiLTE
oed has joined #ipfs
<MikeFair>
hit index2.html and get your socks knocked off
<MikeFair>
err AdminLTE
<M-fabrixxm>
that's fun.. I'm working on a project using AdminLTE, I opened your link and I was like "how did I get there... no wait.. that's not my tab..."
<MikeFair>
at the moment it's just a static thing
<ebel>
weirdly the localhost version still isn't running for me. presume ipfs.io has more connections to the swarm
Guest153546[m] has joined #ipfs
<MikeFair>
ebel: and I've asked for it from that location earlier so it started asking
<amosbird>
haha
<MikeFair>
does ipfs ls /ipfs/QmbyZ75a61PmCWxRUedrmZMavg2EpULvkf1hAXJzHYnr8b
<MikeFair>
return anything?
<ebel>
ah right
<MikeFair>
amosbird: And the point r0kk3rz and frood were making earlier is /ipfs/QmbyZ75a61PmCWxRUedrmZMavg2EpULvkf1hAXJzHYnr8b will forever return exactly what you see here
<MikeFair>
it _can't_ return anything else because of its design
<r0kk3rz>
well, aside from an astronomically unlikely multihash collision
<MikeFair>
well, parts can get purged out and forgotten
cemerick has quit [Ping timeout: 268 seconds]
<MikeFair>
r0kk3rz: While that's true as it exists; I think that can be effectively eliminated by using two hash keys of half the size but different algos concatenated
Boomeran1 has joined #ipfs
<r0kk3rz>
MikeFair: hah, yeah its not that easy, infact that will probably make it worse
<MikeFair>
r0kk3rz: I just can't see two different algos colliding exactly the same way twice on the same content
<r0kk3rz>
as it is its extremely unlikely
<MikeFair>
Each algo will generate more collisions; but the combinations will be distinct
<MikeFair>
maybe not
<frood>
MikeFair: what you're proposing has basically the same collision chance.
Boomerang has quit [Ping timeout: 245 seconds]
<r0kk3rz>
frood: at best yes, at worst more collision chance
<frood>
instead of a full collision, you need two half collisions. 6 of one, half-dozen of the other
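frood's "six of one" point is easy to see concretely: concatenating two 128-bit truncations produces a name the same size as a single 256-bit hash, so the same pigeonhole bound applies. A hedged Python sketch (SHA-256 and SHA3-256 are my choice of "two different algos"; nothing here reflects what IPFS actually does):

```python
import hashlib

def single_full(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()       # 256 bits, 64 hex chars

def two_halves(data: bytes) -> str:
    # Two different algorithms, each truncated to 128 bits,
    # concatenated into a 256-bit name.
    a = hashlib.sha256(data).hexdigest()[:32]     # 128 bits of SHA-256
    b = hashlib.sha3_256(data).hexdigest()[:32]   # 128 bits of SHA3-256
    return a + b

msg = b"organized information"
# Same output size, so the same pigeonhole bound applies: collisions
# must exist either way; the construction only changes WHICH inputs
# collide, and each half is individually weaker than the full hash.
assert len(single_full(msg)) == len(two_halves(msg)) == 64
```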
robattila256 has joined #ipfs
<MikeFair>
frood: In the mathematical probabilities space yes; but I don't think it does on actual information
<frood>
even distributions are even.
<MikeFair>
frood: because of the way the algos are designed
<MikeFair>
frood: We're not mapping random data; we're mapping organized data/information and that changes the distributions
<r0kk3rz>
MikeFair: you'll just change the colliding information, you wont get less just different ones
<MikeFair>
r0kk3rz: I think, though I might be wrong but I don't think so, that I put the collisions more into the random zones and less into the organized info zone
<MikeFair>
but I'll just leave it as; if ipfs was / could do something about that; it would likely be that
<MikeFair>
not my proposal
<r0kk3rz>
MikeFair: huh? what makes you think there is a difference between organised info and random info when it comes to hash collisions?
<MikeFair>
r0kk3rz: because each algo is designed to produce wildly different characteristics for similar looking bitstreams
<MikeFair>
r0kk3rz: organized information tends to look the same; like the frequency of characters in a language
cemerick has joined #ipfs
<MikeFair>
r0kk3rz: that lack of entropy in the data causes the algos to create wildly disparate answers the closer the similarities are
<frood>
MikeFair: I think you're misunderstanding that property of cryptographic hash functions.
<frood>
it's not that they're designed to de-correlate similar messages, it's that they're designed to decorrelate all messages
<r0kk3rz>
MikeFair: what im saying is, the hash function is already designed to minimise collisions, you dont need to screw with it :)
<MikeFair>
r0kk3rz: two algos designed with the same characteristics less so; it's like agreement between two sources
<MikeFair>
r0kk3rz: I could be misunderstanding; and you might well be right; and I agree ;)
ShalokShalom_ is now known as ShalokShalom
<ebel>
MikeFair: huh, that localhost query for your website, and the ipfs ls is still running...... that can't be right
<MikeFair>
ebel: Just ^c it; I've seen that happen
<MikeFair>
I'm not clear why
<ebel>
this is not reassuring for the reputation of IPFS :P
Boomeran1 is now known as Boomerang
<MikeFair>
frood: Their architecture is to minimize the risk a message has been altered; they do that by maximizing the hash differences for small bit differences
<MikeFair>
frood: more specifically: maximize the detection of an altered message
<ebel>
you can't guarantee that a hash function will not have a collision. it's still possible. in that case, you just need to change hash function.
Encrypt has joined #ipfs
cyanobacteria has joined #ipfs
<MikeFair>
ebel: correcthole; pigeon hole requires there will be collisions; and in cryptohashes they guarantee that two "messages" or data blocks could absolutely not be anything like each other
<MikeFair>
(that was wierd... keyboard acting funny)
wak-work has quit [Ping timeout: 268 seconds]
<MikeFair>
cryptohashes guarantees that two data blocks producing a colliding hash must look extremely different
<frood>
yes they do, but they do that by producing an even distribution regardless of inputs
<MikeFair>
Each algo takes a different approach to that; but they all have the same goal
<MikeFair>
frood: right; and what I'm saying is our "information" doesn't have a random distribution
<frood>
that shouldn't matter. if inputs bias output distributions you weaken pre-image resistance.
<MikeFair>
could be you're right; I'm thinking on the bleeding edge of my understanding of the distributions; but for example; it seems to me that if I take the same piece of data and hash it forward; then hash it backward I get two hashes with a random distribution collision
<MikeFair>
The likelihood of a collision on the forward and backward reading of its own data colliding with another file doing the same thing is less than that of a larger key
<MikeFair>
err hash
<MikeFair>
because we aren't taking a random distribution set of input data
<MikeFair>
if it was random; then forwards and backwards would be the same as just forwards ; because like you said; even is even
<MikeFair>
And I could be completely miscomprehending the whole thing ;)
<MikeFair>
it could still be even because of the reduced hash space
<MikeFair>
OH!!! here's a description; using the forward and backward model; you are guaranteed that all data is a palindrome
<MikeFair>
That doesn't have the same distribution as random data
wak-work has joined #ipfs
<MikeFair>
But that's the concept I'm exploring around hash keys and collisions
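The forward-and-backward idea can be written down directly, which makes the palindrome observation concrete: reversing the input just swaps the two halves of the name. A Python sketch (this is MikeFair's construction as I read it, not anything IPFS does):

```python
import hashlib

def fwd_bwd(data: bytes) -> str:
    # Hash the data read forward, then read backward, and concatenate.
    f = hashlib.sha256(data).hexdigest()
    b = hashlib.sha256(data[::-1]).hexdigest()
    return f + b

m = b"not a palindrome"
# Reversing the input simply swaps the two 64-hex-char halves,
# so the scheme is equivalent to one 512-bit hash over a constrained
# (mirrored) encoding -- the distribution argument doesn't buy more.
assert fwd_bwd(m[::-1]) == fwd_bwd(m)[64:] + fwd_bwd(m)[:64]

# For a genuine palindrome the two halves are identical:
assert fwd_bwd(b"abccba")[:64] == fwd_bwd(b"abccba")[64:]
```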
<MikeFair>
For now we've got fixed size 256k blocks producing those insanely large hashes
<MikeFair>
so like you guys said; it ain't broke; and it ain't going to break
<MikeFair>
so thanks for letting me play :)
<MikeFair>
plus we've got multihash in the wings
<MikeFair>
AphelionZ: Still there?
<MikeFair>
ebel: Could you try the ipfs name resolve -r /ipns/daclubhouse.net again?
<ebel>
MikeFair: error: could not resolve name
<MikeFair>
nslookup -type=TXT daclubhouse.net
<MikeFair>
You should see a dnslink entry
<MikeFair>
ends in LEKa
<kythyria[m]>
Yup
mildred has joined #ipfs
<AphelionZ>
MikeFair: ya
<MikeFair>
kythyria[m]: Hey there! :) does ipfs ls /ipfs/QmbyZ75a61PmCWxRUedrmZMavg2EpULvkf1hAXJzHYnr8b
<MikeFair>
work for you
<MikeFair>
AphelionZ: I'm still hoping to see an Interplanetary Pastebin via a DAG-based session object
<ebel>
MikeFair: yes I see a dnslink thing
<AphelionZ>
MikeFair: haha ok. I have a full time job.
<MikeFair>
ebel: ok; then it must just be that content id
<AphelionZ>
I'll let you know.
<ebel>
weird
<MikeFair>
AphelionZ: I'm cloning your repo now
<AphelionZ>
cool, i just tagged 0.3.0 this morning
<AphelionZ>
so it should be in good shape
<MikeFair>
AphelionZ: and yeah I know I should fork it; but I'm really wishing GH would change that workflow
<AphelionZ>
i dont mind if you make a PR
<MikeFair>
well I'm going to have to successfully change the code first
<MikeFair>
my JScript foo is weak
<MikeFair>
I'm trying to learn integrating AdminLTE with some kind of JavaScript code to dynamically make some of those object entries
<AphelionZ>
the trick will be the dag stuff since I dont think ipfs-js-api has functions for that yet /cc daviddias
mildred has quit [Ping timeout: 258 seconds]
<ebel>
MikeFair: still can't get that value. weird
<MikeFair>
AphelionZ: does it have the older object functions... nm I'll look ;)
ygrek has joined #ipfs
<MikeFair>
ebel: you did an ipfs init right
<MikeFair>
ebel: I mean you can get other values
<ebel>
yeah did init and am running daemon
<ebel>
have put things and got them back
<MikeFair>
ebel: what about ipfs ls /ipns/ipfs.io
<ebel>
yeah I can ls that. and put an image and see it on ipfs.pics
suttonwilliamd__ has quit [Ping timeout: 256 seconds]
ygrek has joined #ipfs
maciejh has quit [Ping timeout: 268 seconds]
MikeFair has quit [Ping timeout: 260 seconds]
Guest74936 is now known as FrankPetrilli
espadrine has quit [Ping timeout: 240 seconds]
HostFat__ has joined #ipfs
HostFat_ has quit [Ping timeout: 240 seconds]
ianopolous has joined #ipfs
ianopolous has quit [Ping timeout: 240 seconds]
jkilpatr_ has joined #ipfs
aquentson1 has joined #ipfs
aquentson has quit [Ping timeout: 240 seconds]
Encrypt has joined #ipfs
jkilpatr has quit [Ping timeout: 260 seconds]
wallacoloo_____ has joined #ipfs
seharder has joined #ipfs
bastianilso has quit [Quit: bastianilso]
betei2 has joined #ipfs
dignifiedquire has quit [Quit: Connection closed for inactivity]
ianopolous has joined #ipfs
espadrine has joined #ipfs
<kevina>
whyrusleeping: are you aiming to get 0.4.6 out in the next couple of days?
maciejh has joined #ipfs
<whyrusleeping>
Yeah
<whyrusleeping>
as soon as i can
<whyrusleeping>
blocking on the migrations stuff and the enumerate children issue youre working on
<Mateon1>
kevina, whyrusleeping: So, is 0.4.6 just a quick bugfix patch? (like 0.4.4)
<whyrusleeping>
Mateon1: not really, it actually contains some good improvements
<whyrusleeping>
i'm pushing it out a *little* faster than i would otherwise because of an issue with migrations on docker images
<Mateon1>
That reminds me that I need to update my nodes
<whyrusleeping>
but otherwise i'm going to be pushing for a release every two to three weeks
<whyrusleeping>
none of this four month long release cycle stuff
<Mateon1>
Cool, might not need to run master on all my nodes now :P
<Mateon1>
By the way, what was the commit from which 0.4.5 release was compiled from? Or is that just the head of the release branch?
PrinceOfPeeves has joined #ipfs
ShalokShalom has quit [Quit: No Ping reply in 180 seconds.]
ShalokShalom has joined #ipfs
<kevina>
whyrusleeping: all right I am starting to work on the fix now
<Mateon1>
Oh, that's an odd bug. I broke wget because of some dynamic linking stuff, and make failed because the wget binary exists, but exits with error, and bin/dist_get doesn't fall back to the next available utility (curl, fetch)
<whyrusleeping>
Mateon1: oh weird... because 'wget' exists it selects that tool and moves on?
<whyrusleeping>
kevina: cool, thank you!
<Mateon1>
Yep, quite an easy fix, but the "download complete" message will have to be duplicated
SuprDewd has joined #ipfs
<Mateon1>
I forget where I have my development go-ipfs repo...
ianopolous has quit [Remote host closed the connection]
wallacoloo_____ has quit [Quit: wallacoloo_____]
rendar has quit [Ping timeout: 240 seconds]
ShalokShalom has quit [Quit: No Ping reply in 180 seconds.]
wkennington has joined #ipfs
atrapado_ has quit [Quit: Leaving]
ShalokShalom has joined #ipfs
tilgovi_ has joined #ipfs
_mak_ has quit [Quit: ..]
_mak has joined #ipfs
tilgovi_ is now known as tilgovi
tilgovi has quit [Remote host closed the connection]
<cblgh>
but if it's the domain provider it depends a lot, usually they have some kind of area where you can set A, MX, TXT records
john1 has quit [Ping timeout: 252 seconds]
<cblgh>
so you'd click the corresponding button and either choose to create a TXT record, or write "TXT" yourself in one box, and then input the record data itself in the other box (i.e. 'dnslink=/ipfs/$SITE_HASH' for ipfs purposes)
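For reference, the TXT value cblgh describes has the shape `dnslink=/ipfs/$SITE_HASH`; a tiny parser makes the format explicit (the function name is invented for illustration, and the hash is the one MikeFair shared above):

```python
def parse_dnslink(txt_value: str) -> str:
    """Extract the content path from a dnslink TXT record value,
    e.g. 'dnslink=/ipfs/Qm...' -> '/ipfs/Qm...'."""
    prefix = "dnslink="
    if not txt_value.startswith(prefix):
        raise ValueError("not a dnslink record")
    return txt_value[len(prefix):]

record = "dnslink=/ipfs/QmbyZ75a61PmCWxRUedrmZMavg2EpULvkf1hAXJzHYnr8b"
assert parse_dnslink(record).startswith("/ipfs/")
```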
google77 has quit [Quit: leaving]
jkilpatr_ has quit [Ping timeout: 240 seconds]
<jbenet>
cblgh: i dont need "how to set DNS records" instructions. sorry i should've been more clear. i meant, how to set and use dnslink records
<jbenet>
dnslink DNS TXT records
Yatekii_ is now known as Yatekii
anter1174 has joined #ipfs
matoro has quit [Ping timeout: 240 seconds]
gmoro has joined #ipfs
<AphelionZ>
whyrusleeping: you around?
anter1174 has quit []
<whyrusleeping>
AphelionZ: technically yes, but i'm not going to be very responsive on irc
<AphelionZ>
whyrusleeping: np, quick question - can I play with the js daemon yet?
Boomerang has joined #ipfs
gmoro has quit [Ping timeout: 260 seconds]
network[m] has joined #ipfs
Boomerang has quit [Ping timeout: 260 seconds]
jkilpatr_ has joined #ipfs
Boomerang has joined #ipfs
ShalokShalom has quit [Quit: No Ping reply in 180 seconds.]
cemerick has quit [Ping timeout: 268 seconds]
<whyrusleeping>
AphelionZ: i think so
<AphelionZ>
Whyrusleeoing cool where can i find it?
ShalokShalom has joined #ipfs
cyanobacteria has joined #ipfs
<AphelionZ>
oops. whyrusleeping ^
slothbag has joined #ipfs
<whyrusleeping>
Not really sure, i havent played with the js stuff much
<AphelionZ>
dignifiedquire: will that work in the browser?
<dignifiedquire>
repo: github.com/ipfs/js-ipfs
<dignifiedquire>
there are examples for browser usage there
<AphelionZ>
nice, thank you!
<dignifiedquire>
you are welcome, have fun testing it and please file issues if things are not working
<AphelionZ>
if its running in the browser.... where does it store stuff?
<AphelionZ>
sorry... probably a profoundly dumb question
<AphelionZ>
but is it localstorage? or...
pfrazee has joined #ipfs
<AphelionZ>
like if I hosted this on a website and somebody visited, what happens?
<hsanjuan>
AniSkywalker: the informer doesn't have access to the consensus state. I don't see why it would need to, but if you can provide a compelling reason it could be done. It has RPC access
<hsanjuan>
AniSkywalker: I don't know what it is that you call capacity. Disk capacity?
<AniSkywalker>
Yeah.
<AniSkywalker>
For smarter allocations.
<hsanjuan>
The informer can report whatever metric. Those metrics are passed to the allocator. Cluster doesn't really care what the metric is and how the allocator decides
<AniSkywalker>
So I need to update the RPC protocol to add that, no?
<hsanjuan>
To add what?
<AniSkywalker>
A way for nodes to report disk capacity?
<hsanjuan>
no
<AniSkywalker>
Oh, does this run on each node?
<hsanjuan>
just implement GetMetric() which returns a Metric which holds the disk capacity
<hsanjuan>
yeast
<hsanjuan>
yes
<hsanjuan>
all components run in each node
<AniSkywalker>
So how does a comparison between nodes occur?
<AniSkywalker>
Or can it?
<hsanjuan>
the allocator receives metrics for all candidates peers
<hsanjuan>
and it has to decide between those
<AniSkywalker>
Ah, I see. So how is networking handled? Automagically?
Encrypt has quit [Quit: Quit]
<hsanjuan>
It should not concern you when implementing a new Informer/PinAllocator. But the main Cluster component regularly does GetMetric() from the Informer and pushes it to another component which is the PeerMonitor (in the cluster leader). When a Pin Request comes in, it will get the metrics which have been logged and call the Allocate() method with those.
<ianopolous>
Is there a way to get the merkle links from a block without downloading it? I'm aware of ipfs refs, but that doesn't seem to work on blocks.
matoro has joined #ipfs
tilgovi has quit [Client Quit]
<AniSkywalker>
hsanjuan is there any reason `Informer` is exported and there is a New method?
<AniSkywalker>
function*
<AniSkywalker>
I.e. not just New(*rpc.Client)?
tilgovi has joined #ipfs
pfrazee has quit [Ping timeout: 245 seconds]
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
pfrazee has joined #ipfs
ashark has quit [Ping timeout: 255 seconds]
pfrazee has quit [Ping timeout: 240 seconds]
cyanobacteria has quit [Ping timeout: 245 seconds]
<daviddias>
dignifiedquire: thank you :)
<daviddias>
AphelionZ: sorry, I was out, I see that dignifiedquire answered your Q :)
pfrazee has joined #ipfs
tilgovi has quit [Ping timeout: 252 seconds]
matoro has quit [Ping timeout: 240 seconds]
matoro has joined #ipfs
<AphelionZ>
daviddias: thanks anyway :)
<hsanjuan>
AniSkywalker: if Informer is not exported, then I'm not sure you can call its methods from other module (at least they don't show in godoc)... Also, components are passed to Cluster on NewCluster(...) but it is the cluster which sets the rpc.Client later. That's why they have to satisfy this SetClient() method.
<AniSkywalker>
You're free to call its methods anywhere if you get an instance of it.
<AniSkywalker>
Also, is the RPC a line between the IPFS cluster node and the IPFS node itself?
Mizzu has quit [Ping timeout: 268 seconds]
<AniSkywalker>
hsanjuan where do I need to implement the receiver of the RPC call? Or do I?
pfrazee has quit [Ping timeout: 240 seconds]
pfrazee has joined #ipfs
<hsanjuan>
AniSkywalker: if the object is not internal to the module it should be exported. Informer is exported and its methods documented accordingly in godoc. Otherwise it's very confusing to know that it actually implements the interface, unless you return the interface type directly. But that's only good when you provide a default implementation and this is not the case.
<hsanjuan>
the RPC is the way Cluster components can talk to other components. In particular it is the way a component can talk to the IPFSConnector component which is the line to the IPFS node.
<AniSkywalker>
What parts do you need to document? I would think a package dedicated to informers would be apropriate.
<AniSkywalker>
*i can't spell right now
<AniSkywalker>
But anyways, where do I tell IPFS how to respond to the request?
<hsanjuan>
You need to document what GetMetric() does for example
betei2 has quit [Quit: leaving]
<hsanjuan>
you don't receive RPC requests... you just have the possibility to make them
<AniSkywalker>
hsanjuan Why? Why not just document what the receiver does?
<AniSkywalker>
Also, right, but where do I go to tell IPFS how to answer them so that it works?
<hsanjuan>
AniSkywalker: it looks better and it is standard to document methods separately imho.
<AniSkywalker>
Well you're really offering basic metrics.
<AniSkywalker>
So it makes sense to have metric.Capacity(*rpc.Client)
<hsanjuan>
yes, then offer basic standard documentation
<AniSkywalker>
And document that to say "Returns an informer that does X," since nobody should ever care about the actual nitty gritty.
<hsanjuan>
no, you just have to implement GetMetric() which returns an api.Metric object
<hsanjuan>
you can even ignore the SetClient(rpc) and leave it empty
<hsanjuan>
look at the numpin informer code
<AniSkywalker>
hsanjuan anyways, you're saying I don't need to actually implement the receiver?
<hsanjuan>
you just need to implement the Informer interface as the numpin informer does
<hsanjuan>
and then when that is done you can implement the PinAllocator as the "numpinalloc" module does
<AniSkywalker>
So who implements, for example, "IPFSPinLs"?
<AniSkywalker>
So I need to implement something there to get the capacity?
<hsanjuan>
For a naive version of a disk space informer you don't need to use the RPC. From the informer you just have to figure out how to get the free space from the filesystem
<hsanjuan>
start with such thing and then you can iterate
<hsanjuan>
I have to get going. Leave comments on the issue as I won't be able to keep tabs on irc in the next few days. good luck!
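The real Informer/PinAllocator interfaces hsanjuan describes are Go (`GetMetric()` returning an `api.Metric`, `Allocate()` choosing among candidate peers); the Python below is only a conceptual analogue of the naive disk-space informer he suggests, with all names invented.

```python
import shutil

def get_metric(path: str = ".") -> dict:
    """Naive disk-space informer: report free bytes at `path`.
    Analogue of a GetMetric() that returns a free-space metric;
    no RPC needed, just the local filesystem."""
    usage = shutil.disk_usage(path)
    return {"name": "freespace", "value": usage.free}

def allocate(candidates: dict, wanted: int = 1) -> list:
    """Naive allocator: given {peer_id: metric} for all candidate
    peers, pick the ones reporting the most free space."""
    ranked = sorted(candidates,
                    key=lambda p: candidates[p]["value"],
                    reverse=True)
    return ranked[:wanted]

peers = {"peerA": {"name": "freespace", "value": 500},
         "peerB": {"name": "freespace", "value": 900}}
assert allocate(peers) == ["peerB"]
```

Start with something like this and iterate, as hsanjuan says: the informer only has to produce a number, and the allocator only has to order candidates by it.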
MikeFair has joined #ipfs
pfrazee has joined #ipfs
henriquev has quit [Quit: Connection closed for inactivity]
<jbenet>
whyrusleeping daviddias: see https://github.com/jbenet/asciinema-selfhost/issues/2 for self-hosting asciinema and ipfs. note that asciinema itself can now record offline (locally) and the player can be embedded easily. the asciinema readme even mentions ipfs :) <3