<whyrusleeping>
lgierth: hrm...
<whyrusleeping>
how is it added in?
<lgierth>
patch link-add
<lgierth>
and then bubble the new hashes up to the root
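(A rough sketch of the patch-and-bubble flow being described, with hypothetical file names and placeholder hashes; the exact ipfs object patch argument order has varied between go-ipfs versions:)

    new=$(ipfs add -q go-ipfs-v0.4.5.tar.gz | tail -n1)             # hash of the new artifact
    dir=$(ipfs object patch <dir-hash> add-link go-ipfs-v0.4.5.tar.gz $new)
    root=$(ipfs object patch <root-hash> add-link go-ipfs $dir)    # bubble up to the root
    # $root is the new site root that the dist dnslink would point at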
<lgierth>
i thought the dist build wasn't touching existing things at all
<whyrusleeping>
it's not touching existing things, but it's probably because i've never synced my build machine's cache with your changes
<lgierth>
ah!
<lgierth>
ipfs get /ipns/dist.ipfs.io could be the base for the workdir
<MikeFair>
Hi all, question... Is there a decent way to get something resembling a directory structure I can copy into/out of on Windows?
<MikeFair>
What I'd like to do is have some directory on the FS that I can "share" with a friend
<achin>
if you're thinking of something like dropbox, where you can just drag files into a folder, ipfs isn't there yet
<MikeFair>
achin: It doesn't need to be that formal; I thought about drag n drop into the web browser
<achin>
i don't *think* there's any drag-drop in the browser stuff, but i'm not sure. i haven't been keeping up-to-date on that side of things
<MikeFair>
achin: But I would like to be able to make the equivalent of a "zip archive" and then publish that id
<achin>
but on the command line, it would be a fairly simple command: ipfs add -rw c:/temp/my_folder
<achin>
ipfs will print out a bunch of progress data to the console, but at the end would be a single ID that you could share
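(A minimal sketch of what that looks like; <hash> stands in for the real ID that gets printed:)

    ipfs add -rw c:/temp/my_folder
    # added Qm...   my_folder/some_file.txt    <- one progress line per file
    # added <hash>                             <- the final line: the ID to share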
<MikeFair>
achin: but what happens when I change the directory? Do I re-add everything?
<achin>
yep
<achin>
you'll then get a new ID that you'll also have to [re]share
<MikeFair>
I was hoping for an immutable id
<MikeFair>
I saw the ipns possibility but it didn't make perfect sense
<achin>
yeah, ipns is the solution here
<achin>
sorry to bail on you, but i'm heading to bed now
<achin>
hang around, though! many helpful people are in here
<MikeFair>
achin: btw -- check out localhost:5001/webui
<MikeFair>
achin: The "Files" tab has DragNDrop / Select ; and a listing of local files
<achin>
neat! i've not used the webui in a long time
<MikeFair>
achin: Thanks
<deltab>
MikeFair: your node has its own key pair that it can use to sign things
<deltab>
anyone can use the public key of that pair to verify that a message came from your node
<MikeFair>
deltab: I'm thinking that for the moment the best way to start is to create a password protected zip archive of the files to transfer then email the file id and password
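(One way that interim plan might look on the command line, with hypothetical file names:)

    zip -re photos.zip photos/    # -e prompts for an archive password
    ipfs add -q photos.zip        # prints <hash>; email <hash> plus the password
    # on the recipient's side:
    ipfs get <hash> -o photos.zip && unzip photos.zip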
<MikeFair>
I'd really like to be able to request a delete
<deltab>
and the key doesn't change (unless you delete it)
<deltab>
ipns then lets you use the key to find a directory hash that your node has signed
<MikeFair>
deltab: Well what I haven't figured out how to do is make my node id look like a file directory hierarchy
<MikeFair>
and Windows doesn't seem to provide the ipns executable
<deltab>
it's all part of the ipfs executable
<MikeFair>
hmm
<deltab>
ipfs name
<MikeFair>
ahh
<deltab>
e.g. ipfs name publish /ipfs/QmatmE9msSfkKxoffpHwNLNKgwZG8eT9Bud6YoPab52vpy
<deltab>
then that directory would be available as /ipns/yournodeid
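(The whole publish/resolve round trip, with placeholder hashes:)

    ipfs add -rw c:/temp/my_folder          # note the final <dir-hash> it prints
    ipfs name publish /ipfs/<dir-hash>      # signed with your node's key
    ipfs name resolve                       # -> /ipfs/<dir-hash>
    # friends can now browse /ipns/<your-node-id>, which follows your updates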
<MikeFair>
I really need an address book for these ids
<MikeFair>
Trying to copy/paste QmU3o9bvfenhTKhxUakbYrLDnZU7HezAVxPM6Ehjw9Xjqy into these commands is annoying
<deltab>
yeah
<MikeFair>
or I need an 'ipfs base <mynodeid>' command
<deltab>
if you have a dns name, you can use that
<MikeFair>
That puts the text into a .base file in the .ipfs folder and uses that if I haven't provided one on the command line
<deltab>
by adding a TXT record containing dnslink= and the ipfs/ipns path
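(A sketch of such a record; example.com and the paths are placeholders:)

    ; DNS zone entry
    ipfs.example.com.  IN TXT  "dnslink=/ipns/<your-node-id>"

    ipfs dns ipfs.example.com    # -> /ipns/<your-node-id>
    # after which /ipns/ipfs.example.com works in paths and on gateways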
<deltab>
ah, nice idea
<MikeFair>
I do have control over a few domains
<deltab>
or assign a variable $base
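(In a POSIX shell that could look like the following; MikeFair is on Windows, so cmd/PowerShell syntax would differ. ipfs id -f takes a format string:)

    base=$(ipfs id -f="<id>")     # your node id, fetched once instead of pasted
    ipfs name publish /ipfs/<dir-hash>
    echo "share this: /ipns/$base"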
<MikeFair>
OH DUH!
<MikeFair>
is the proper pathname ipfs/<nodeid> ?
<deltab>
/ipns/<nodeid>
<MikeFair>
got it
<deltab>
and /ipns/<dnsname> to look up the TXT record
<MikeFair>
ok... hmm... I did 'ipfs name resolve' (without having published anything) to see what comes back -- I half expected to get my own nodeid but I got an id I don't recognize instead... do you know what's going on there?
<deltab>
I expect that's what you have published by default, probably the empty directory
<deltab>
try accessing it as /ipfs/<hash>
<MikeFair>
I did a get on it; it's an empty directory ;)
<MikeFair>
Ok I already have a TXT record for SPF email; what's the deal for ipfs
<deltab>
you can add another, though it may be safer (for SPF resolution) to give it its own name, e.g. ipfs.whatever
<MikeFair>
deltab: Oh that's way better; I was going to do something like that anyway
<MikeFair>
deltab: Give friends the ability to dynamically update portions of their DNS tree
* MikeFair
half wonders if <nodeid>.nodes.ipfs.io might be usable for that
<MikeFair>
I do wish "add" had a --lifetime option
<MikeFair>
I think it'd be great if there was a secondary namespace like "/ipts/" where nodes were expected to churn their storage and pinning wasn't available
<deltab>
note that /ipfs/QmU3o9bvfenhTKhxUakbYrLDnZU7HezAVxPM6Ehjw9Xjqy is the web UI
<deltab>
(so don't publish that thinking it's your files)
<MikeFair>
deltab: Yeah, I'm still trying to figure out what publish is doing technically; My nodeid is /ipfs/QmfLTrULoFMsdzeRqe6wX7pAaaAPrmLPbXuxuqYfFpbxDM
<MikeFair>
deltab: But I'm thinking of that as like my "peer id" -- the address of the thing that's chatting on my behalf, participating in the swarm, etc, not a directory of files (other than the blocks it "has").... However the thing I want to "publish" is something else
<deltab>
yes, it has both roles
<deltab>
aiui, it's given one prefix when used in the dht as a key for the node's tcp address, and another when identifying a hash signature
<MikeFair>
Yeah, I'm hoping to avoid using it as much as possible though because I like the thought that it's tied to this machine
<MikeFair>
So my phone, other computers, etc will all have their own peer ids; and that nodeid represents the stuff stored in the repo on that machine
* deltab
nods
<MikeFair>
So I don't like the idea of saying "Hi friends, here's the "nodeid/file/path" of those pictures we took last week"
<deltab>
you should be able to use the dnslink there instead of the node id
<MikeFair>
where nodeid changes depending on the machine; I'm thinking more like I have some other nodeid that I can import into each of those machines to work on a common archive
<deltab>
yes, they just have to have the key pair, I think
<MikeFair>
I'm also really interested in using ipfs for databases; both document (JSON objects) and SQL (mainly something like distributed SQLite)
<deltab>
yep, people are working on those too: see ipld and orbit
<MikeFair>
ok; for the moment I'm having a hard time seeing something better than a password-encrypted zip archive for sharing a directory of files that I'd rather not have sit around forever
<deltab>
yes, I think that's your best option for the moment
<MikeFair>
Or maybe I just use webtorrent for that stuff and ipfs for stuff I want more widely/publicly distributed
<deltab>
there's work being done on private networking too, so that you can restrict who can connect and transfer stuff
<MikeFair>
deltab: That's cool -- yeah I always thought that if I somehow combined my nodeid and another nodeid, that "longpairid" would make a "common space" that could be securely encrypted
<MikeFair>
Funny, I actually made a database design that worked on that id; it wasn't encryption, just chained namespaces where ids in one space were mapped into ids in another (it was a way to eliminate many intermediate NAT networks as a problem)
* deltab
nods
<MikeFair>
All you had to do was transmit/agree on the common label names that would be used; but locally you'd have different numeric addresses that would get translated as it hopped from network to network
<MikeFair>
(that made it so you could work with small 8/16-bit ids in a huge label namespace)
<MikeFair>
So is IPLD kind of equivalent to a distributed JSON document with hard links?
<deltab>
yep
<MikeFair>
Sweet! (And I'm loving that first paragraph of cjdns -- and yeah; while I wasn't willing to claim such an ambitious statement; it does sound similar)
<MikeFair>
The idea is providing a choice: store/transmit the actual/original data, or the seed and data-length information required to generate it; it's a different take on storing data blocks -- we don't have to actually store the actual block; we only need to store instructions for recreating it (like a DNA sequence of sorts)
<deltab>
in other words, compression
<deltab>
it's a matter of choosing which originals to compress and how to encode them more efficiently
<MikeFair>
deltab: Yes; though officially I think of compression as being some kind of encoding of the original; I was thinking more like a "if you use this PRNG, starting at this point, and executing this length; you get the data back"
<MikeFair>
the concept actually began as "Really Bad Encryption Encoding" -- the idea of good encryption is the attacker can't figure out the original message from just the signature or ciphertext; the concept of this was the exact opposite -- the signature of the data was a form of instructions/guidance on how to know if your guesstext would recreate the signature
<MikeFair>
So it was "this many bytes, using this many 1 values, with this many segments of contiguous 1 values, having this checksum, and this md5sum, ..."
<deltab>
the pigeonhole principle still applies, of course
<deltab>
if the instructions take up 32 bits, for instance, there's a maximum of 2³² possible corresponding messages
<MikeFair>
Right, I wasn't sure how, but it was really hard for me to see how, say, a SHA-1 checksum gets fooled with the same size message and the same number of 1s
<MikeFair>
It was almost like certain elements of the signature were multiplying the domain of the signature space
<MikeFair>
Like if I said "1024 bits, 128 1 bits, split into 68 sections of 1s, with the SHA checksum"
<MikeFair>
I can encode that into 64 bits
<MikeFair>
(32 bits for the first few pieces, 32 bits checksum)
<MikeFair>
I totally agree the pigeonhole principle HAS TO apply somehow, but I just couldn't see it in any obvious way
<deltab>
what about the 0s?
<MikeFair>
they are things you didn't mark 1
<MikeFair>
so it's implied with 128 1 bits
<MikeFair>
in 68 sections
<MikeFair>
I don't tell you where they are
<deltab>
among 1024?
<MikeFair>
yes
<MikeFair>
You have to figure out where they are
<deltab>
a nonogram
<MikeFair>
The checksums were then helping to guide you in the right direction; I was going to have one "simple modular sum" (like an adler32) and one more cryptographic sum; the first to guide you toward the right direction (to eliminate guesses) and the second to help you determine if you found the right one
<MikeFair>
exactly
<MikeFair>
The signature of the message is a nonogram for the original text
<MikeFair>
pseudocode
<MikeFair>
err, potentially even pseudocode for some mythical modulus-based PRNG
<MikeFair>
Though it's definitely not "pseudo-random"; it's deterministic
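(To make the pigeonhole point concrete with the numbers above: the 128 ones must split into 68 positive runs, and the 896 zeros must fill 69 gaps whose 67 interior gaps are nonempty, so the number of matching 1024-bit strings is

    \[ \binom{127}{67}\binom{897}{68} \approx 2^{465} \gg 2^{64}, \]

meaning any 64-bit descriptor, checksums included, is necessarily shared by an astronomical number of distinct messages.)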
<MikeFair>
I decided to think of the binary image as a "piece of graphic art" rather than a mathematical number and see what I might come up with if I was trying to compactly explain to someone how to recreate that image; or be able to guide them to manipulate their own painting to match the original
<MikeFair>
For instance, the first bit would be whether or not the image is inverted (so the signature would always encode the version that had fewer 1 values)
<MikeFair>
Rotate the image N bits so that the first occurrence of the longest run of 1s is at the front
<MikeFair>
stuff like that
<MikeFair>
Anything that can be compactly expressed and eliminates large numbers of downstream possibilities
<MikeFair>
I'm definitely going to take some time and read up on cjdns -- it looks very much like something I'd be interested in seeing used more
<MikeFair>
Hmm, IPLD seems to suffer the same fate as file directories; changing the content of a value changes its location, which changes the link, which changes the entire document address....
<MikeFair>
And that recursively rolls back up the entire chain of linked pieces
<deltab>
right, but only the ancestors, so O(log n)
<MikeFair>
I'm thinking of an application that uses some kind of a database; the whole point is to let the user manipulate the values and give it some "consistent container name" (a file name) -- e.g. {"Project name": "The Great American Novel" by "Default Author"} (and hit save); which then changes to {"Project name": "The Great American Novel" by "Dakota Diamond"} (and hit save)
<deltab>
that's what ipns is for
<deltab>
a consistent name that can change what it refers to
<MikeFair>
But I wasn't thinking my accounting application is going to be publishing ipns values
<deltab>
(or any other mechanism that can publish a hash)
<MikeFair>
Nor was I thinking the links inside the database would be ipns values
<deltab>
not inside, no
<deltab>
ipfs for those
<MikeFair>
for example, a contact manager; I have a "deltab" contact; it's your run-of-the-mill JSON object
<MikeFair>
And I thought a cool feature of a database like this would be to allow applications to put their own data inside that contact via links
<MikeFair>
So deltab/email/ and deltab/calendar/ would get me objects from those applications related to deltab
<haad>
MikeFair: take a look at https://github.com/haadcode/orbit-db, that might work for your use case or at least give some ideas how to do dynamic content with IPFS
<MikeFair>
haad: Thanks, yeah, orbit-db has the right descriptions in the first few paragraphs :)
<MikeFair>
It's not that I care so much that the addresses are changing; it's how does my application and end user deal with that without going insane :)
<MikeFair>
It seems like most of my use cases are: "Look at this fixed, predictably named point to access the latest version of the expected content" :)
<haad>
MikeFair: then orbit-db should work nicely for you. the "predictable name" is the name of the db. note though that currently orbit-db doesn't have any notion of "access control", meaning anyone can write to any db (==name). we're planning on adding support for signed db's where the writes are bound and limited by signing keys, which will give you the ability to control whose updates the db accepts.
<MikeFair>
haad: Interesting -- is the db one named point or can it be more fragmented than that? For instance, in SQLite you have the single file; but a change to a single table would likely change the entire database address
<MikeFair>
haad: Or like in a JSON document where you'd like to change the "author" "value"
<MikeFair>
I'm really interested in how you think access control to updates can be managed
<nicolagreco>
steven[m]:
<nicolagreco>
are you stebalien?
<MikeFair>
I was thinking that download readability can be controlled by encrypting the individual data blocks with a group archive key (or database key if you will); then if you haven't been given the decryption (because you're not part of the accepted group) you at least can't read the data
<haad>
MikeFair: yeah, something like that ^ for reads. access control to updates can be done on the underlying data structure level, basically signing each update with a key and on the receiving end verify the signature against a key.
<MikeFair>
So when I think of a "database" I think of two things; the "id" or "location" of the database that my application "opens"; and then inside is a bunch of structured content; tables/rows, keys/values, directories/files, etc
<MikeFair>
And my application needs to speak the right "protocol" to access/manipulate the entities inside the database
<MikeFair>
When orbit-db says "database" what is it thinking/describing?
<haad>
MikeFair: exactly that ^ :) a "database" is a set of data that can be manipulated, and which gets replicated to the p2p network automatically (for those who are listening) on every update.
<MikeFair>
haad: Well I'm not thinking that I can take my "SQLite" database file, stick it inside orbit-db and expect it to work for all the applications looking at their respective SQLite files
<MikeFair>
For instance, I like CouchDBs approach to eventual consistency; keep everything, help the application layer discover if there are conflicts, let the application layer resolve it
<MikeFair>
I get it; each database is local, the peers replicate to each other; there's a protocol for figuring out what the entity ids are and what the latest revision of each entity id should be....
<haad>
MikeFair: ok, I see what you mean. so yeah, no, orbit-db doesn't have a relational database at all (so no SQL). as for the eventual consistency, orbit-db uses CRDTs to resolve the conflicts, so it's slightly different from couchdb in that orbit-db will handle conflicts automatically (i.e. not on the application layer).
<MikeFair>
bummer
<haad>
yeah. what you describe above is correct. the "protocol" there is the CRDT layer, to decide what the latest revision for each "id" (==database) is.
<MikeFair>
I noticed you mentioning that an id/name is the same thing as a database; where I think there's names at two levels; there's the id/name of the database/datastore, and then within that there are names of other entities that can be retrieved (like the names/ids of JSON documents in tree structure like in CouchDB; or tables in a SQL database)
<MikeFair>
Are the names/ids the same thing as a database because everything is linked; so a node within a database is also a database in its own right?
<haad>
MikeFair: actually, let me clarify the previous a bit: there's no "automatic conflict resolving" per se, but rather the CRDTs (*conflict-free* replicated data types) handle the data merging in a way that there are no conflicts.
<MikeFair>
I'm just trying to picture (1) how is the database internally organized? what am I putting into this database? It sounds like it's a key/value tree (effectively some kind of a JSONesque Object structure); and (2) how would my application (Unity3d for example) access/update/get notified of updates with this database
<MikeFair>
haad: Ahh ok
<haad>
MikeFair: wrt the id, I see the confusion in our discussion here. orbit-db has several different types of data stores, and within a key value store and document store there are ids as you think of them (just like they would be in a traditional kv store or document store). I was referring to an id as the name of the database.
<MikeFair>
haad: So then I'm expecting that if two nodes tried to make the same update (i.e. both had independently updated the email address for 'haadcode') there's likely a way to detect that
<haad>
MikeFair: oh nice, are you working on a game? :)
<MikeFair>
haad: A distributed VR framework/platform; but using a game to make it really solve actual problems yeah
<haad>
MikeFair: well, at the end of the day, it'll boil down to how you model your data. for some use cases a kvstore (eg. "update key X") makes sense, for some you'd be better off using something like an event log (think: an ordered array of events).
<haad>
MikeFair: cool! I'd be keen to know more. I'm an ex-gamedev myself and we've been looking into how IPFS and its sister projects can provide distributed primitives and tools for VR :)
<MikeFair>
haad: For example, one use case we have is letting you upload 360 images from panoramic 360 cameras; we also have a sort of "Operator's tablet" that can watch the same thing you are and put arrows in your view/turn your heading; and other stuff to help people giving demos see what you're seeing
<MikeFair>
haad: Cool!
<MikeFair>
haad: I've got lots of ideas on that front :)
<haad>
excellent, let's move to PM to discuss more details :)
<haad>
MikeFair: thanks for the call, was good talking to you!
<hsanjuan>
Kubuxu: ipfs-cluster would benefit a lot from private networks.. has there been any progress on that front? I haven't followed much but I've seen some things were merged in libp2p?
<Kubuxu>
There is a PR to go-ipfs that works; it isn't in the state that I want it to be (UX-wise)
<Kubuxu>
I think I have a sprintino on it in a week's time
<hsanjuan>
Kubuxu: cool, but this is mostly a go-libp2p feature right? is the libp2p part usable now?
<muvlon_>
sprintino tortellino
<muvlon_>
pls no forkerino
<Kubuxu>
It is usable
<Kubuxu>
I will need to change a few interfaces to get the UX
<hsanjuan>
Kubuxu: it's mostly a way to secure inter-ipfs-cluster-peer comms
<hsanjuan>
Kubuxu: will wait for the sprintino!
<herzmeister[m]>
a refresh with `ipfs add -r` (a few gigabytes) is currently taking a very long time, freezing for 3-4 minutes at a time but then eventually continuing... what gives?
<SchrodingersScat>
h-hashing?
<kulelu88>
Is there a quick and dirty example to setting up a static website on IPFS?
<alu>
create your html and files inside a folder, open a command line inside the directory and type: ipfs add -r . and then take the hash and use it with http://localhost:8080/ipfs/<hash>/ or https://ipfs.io/ipfs/<hash>/
<alu>
make sure the ipfs daemon is on and you should have your static website working with IPFS.
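(Slightly expanded, with <hash> standing in for whatever the add prints last:)

    cd my-site/          # contains index.html, css/, images/, ...
    ipfs add -r .        # last line printed: added <hash> <dirname>
    # then browse http://localhost:8080/ipfs/<hash>/  (local daemon)
    # or          https://ipfs.io/ipfs/<hash>/        (public gateway)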
<kulelu88>
alu: that's all?
<alu>
Yeah pretty much that's it.
<alu>
Add ipfs to your PATH environment variable so you can execute it from anywhere.
<kulelu88>
alu: and how do I make other people share the hosting of the static code?
<herzmeister[m]>
looks like the ipfs daemon is grinding my system to a halt sometimes (which might have had to do with my previous problem)
<alu>
they can download a copy with ipfs get <hash>
<SchrodingersScat>
kulelu88: and they can pin it
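(Concretely, a friend who wants to help host it would run something like:)

    ipfs pin add <hash>               # keeps the site's blocks through garbage collection
    ipfs get <hash> -o my-site-copy   # or also grab a browsable local copy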
<dryajov1>
which breaks a lot of the code that relies on Rand...
<dryajov1>
kumavis: I like your suggestion regarding the polyfill, but not sure how safe that is and if it's going to confuse other packages about the env they are in...
<dryajov1>
so... just want to run this by everybody before moving forward...
<dryajov1>
my concrete questions are:
<dryajov1>
does anybody see a problem with the window -> self change? and if so, what's a better approach.
<dryajov1>
how do we deal with third party modules that need patching, and without which we won't be able to merge the changes to master...
<dryajov1>
kumavis: I couldn't find any docs about that... thanks
<dignifiedquire>
kumavis: dryajov1 I don't think global is a real option, if it is not enabled by default for webpack and browserify. We want users to be able to bundle js-ipfs and libp2p with as little additional config as possible
<dryajov1>
dignifiedquire: hmm... good point...
<dignifiedquire>
I think this is something that has to be checked on every dependency, which is why it's important to have the tests so we know if things are working or not
<dryajov1>
dignifiedquire: well, AFAIK, for all intents and purposes `self` would be the more portable way; the checks for individual features are still there, they are just done with self rather than window, so I don't think from that standpoint it's any less reliable, but a lot more portable...
<dignifiedquire>
yes, but that is only applicable to those cases where we don't care if we are in a ww or not
<kumavis>
it's enabled by default for browserify
<dryajov1>
dignifiedquire: I see, yeah, agreed from that standpoint, but those would be performed in a different way if you _do_ care about the environment, for the most part; and specifically from the standpoint of libs like ipfs, I think it should try to be as env-agnostic as possible, and check for the environment only in those cases that it needs to
<dryajov1>
dignifiedquire: to clarify... if you do care to make a distinction based on the environment, then you can still check for, say, `window`, but in most cases you only care about the facilities you have available
<dryajov1>
IMHO?
<dignifiedquire>
yes
<dignifiedquire>
it just might be that if someone wrote window in their library, they might have thought about webworkers and were like, yes I want to make sure it doesn't work in a ww or they might have not thought about it
<dryajov1>
dignifiedquire: I see... yeah, I totally agree with that in general... but, a couple of points with regards to this change in general:
<dryajov1>
1) for IPFS, we should be able to do the change since we can always fix what's broken... so, unless I'm missing something, it's safe within that context?
<G-Ray>
Is there a project on top of IPFS which aims to create a "distributed" hard disk? (like maidsafe/storj attempts)
<dryajov1>
2) for third party libs, we can a) fix it and hope it gets merged (because we need support for WW) or b) we can change the lib for something else...
<nicolagreco>
daviddias: does the js-ipld-resolver go and retrieve content behind hashes for me, or am I looking for something else?
<daviddias>
nicolagreco: I think you are looking at the right thing
<daviddias>
the files that correspond to the hashes have to exist though, it won't materialise files for the hashes you pass it on by itself :)