<victorbjelkholm>
lgierth: I dunno, you should be receiving the threads as well as the posts... What browser are you using?
<lgierth>
ff 51.0.1 (64-bit)
<lgierth>
anyone seeing "here's another thread"?
<victorbjelkholm>
nope
<lgierth>
aaaah!
<lgierth>
just refreshed and now there's tons of stuff
<lgierth>
awesome
<victorbjelkholm>
great! :)
<lgierth>
even my ellllo is there
<victorbjelkholm>
guessing it'll be easier to discover more when there are more peers
<lgierth>
i replied on that now
<lgierth>
it'd also help if we analyzed the properties of floodsub
<lgierth>
and seeing your reply there too
<lgierth>
cool
<lgierth>
awesome \o/
<lgierth>
observation: it eats a whole cpu core here
<lgierth>
let me profile
jedahan has joined #ipfs
espadrine has quit [Ping timeout: 240 seconds]
<victorbjelkholm>
Yeah, has zero optimization atm
<lgierth>
ok i'll let someone else do the profiling -- it's pretty hard in a single-process firefox that's already at full capacity by the page :)
<lgierth>
this is pretty cool stuff though
fleeky__ has joined #ipfs
fleeky_ has quit [Ping timeout: 255 seconds]
ygrek has quit [Ping timeout: 258 seconds]
mispaint has joined #ipfs
matoro has quit [Ping timeout: 245 seconds]
<victorbjelkholm>
thanks :)
Oatmeal has quit [Quit: Suzie says, "TTFNs!"]
Mitar has quit [Ping timeout: 240 seconds]
screensaver has quit [Ping timeout: 240 seconds]
screensaver has joined #ipfs
Mitar has joined #ipfs
tmg has joined #ipfs
palkeo has quit [Quit: Konversation terminated!]
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
pfrazee has joined #ipfs
slothbag has joined #ipfs
sametsisartenep has joined #ipfs
DiCE1904 has quit [Read error: Connection reset by peer]
jkilpatr has quit [Ping timeout: 255 seconds]
DiCE1904 has joined #ipfs
pfrazee has quit [Read error: Connection reset by peer]
pfrazee has joined #ipfs
MikeFair has joined #ipfs
matoro has joined #ipfs
maxlath has quit [Quit: maxlath]
HostFat has joined #ipfs
ianopolous has quit [Ping timeout: 252 seconds]
<MikeFair>
o/
pfrazee has quit [Read error: Connection reset by peer]
basilgohar has quit [Ping timeout: 240 seconds]
akkad has quit [Ping timeout: 240 seconds]
Poefke has quit [Ping timeout: 256 seconds]
sametsisartenep has quit [Ping timeout: 255 seconds]
Mitar has quit [Ping timeout: 240 seconds]
basilgohar has joined #ipfs
sametsisartenep has joined #ipfs
Poefke has joined #ipfs
Mitar has joined #ipfs
pfrazee has joined #ipfs
arpu has quit [Remote host closed the connection]
Guest31438 has joined #ipfs
stevenaleach has joined #ipfs
arpu has joined #ipfs
akkad has joined #ipfs
soloojos has joined #ipfs
arpu has quit [Ping timeout: 245 seconds]
muvlon has joined #ipfs
cemerick has joined #ipfs
cemerick has quit [Ping timeout: 255 seconds]
pfrazee has quit [Read error: Connection reset by peer]
cemerick has joined #ipfs
arpu has joined #ipfs
DiCE1904_ has joined #ipfs
DiCE1904 has quit [Ping timeout: 252 seconds]
<MikeFair>
Hey guys; curious; any reason not to support using DNSKEY as the hash source for ipns records
<MikeFair>
(Or DNSSEC more generally)
<MikeFair>
This would be in addition to DNSLINK
wallacoloo_____ has joined #ipfs
<Kubuxu>
We would probably do that if: someone wrote implementations in go (and js), got them vetted, and DNSSEC weren't a more centralized system than CAs are
HostFat_ has joined #ipfs
<Kubuxu>
MikeFair: if you are interested in DNSSEC, DNSCurve would probably interest you too
HostFat has quit [Ping timeout: 245 seconds]
apiarian has quit [Ping timeout: 276 seconds]
apiarian has joined #ipfs
pfrazee has joined #ipfs
muvlon has quit [Quit: Leaving]
<MikeFair>
Kubuxu: I agree; I was just thinking of using that data for the ipns HASH
<MikeFair>
(like I said; in addition to)
akkad has quit [Ping timeout: 260 seconds]
<AphelionZ>
ok, for my next trick... how does versioning work? via object / dag links?
<MikeFair>
IPFS is already hitting a DNS TXT record; so the concept of enabling DNS names has already been broached; I'm just trying to link it in with the PKI that DNS is already working with to manage its own information
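For context, the dnslink lookup mentioned here is just a DNS TXT record; a quick way to inspect one (the record has lived both at the domain apex and at a _dnslink. subdomain, so check both):

    dig +short TXT ipfs.io
    # expect a value like: "dnslink=/ipfs/<hash>"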
<AphelionZ>
I want to try and work versioning into my pastebin next
<MikeFair>
AphelionZ: Each post has a unique HASH
<MikeFair>
AphelionZ: That HASH is the version id
<AphelionZ>
right but are the versions linked to each other at all under the hood? is that possible?
<AphelionZ>
like a linked list of sorts
<MikeFair>
AphelionZ: no, not automatically, because the content is the address; but yes, it's possible
<AphelionZ>
yes, i understand the content-addressable bit
<MikeFair>
well; not exactly
<AphelionZ>
I just want to create a UI that's like "previous version / next version"
<AphelionZ>
theoretically I could use ipns to point to my "database" of versions
<AphelionZ>
but I like the idea of linking the objects themselves
<MikeFair>
it's not possible for the DAG to figure it out on its own; but you can explicitly create a structure to explain it / tell the system how to link them
jedahan has joined #ipfs
<AphelionZ>
the latter part of what you said - is anybody doing that yet? I want to be consistent / compliant if possible
<MikeFair>
IPLD
<MikeFair>
with IPNS at the root
<AphelionZ>
sounds like what I'm after
<MikeFair>
(I think you can do it from the base too)
<AphelionZ>
what's interesting about ipfs is that there's a theoretical state where every single content hash is stored and the system will be able to piece together anything by just grabbing blocks that match, even if they were never intended for the target file
<MikeFair>
The easiest way to get started for a single session's pastebin history is using a JSON object with all the versions for that session
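A minimal sketch of that session object (field names and hashes are made up for illustration; each edit re-adds the file, yielding a new hash for the updated history):

    # session.json: one JSON doc holding a session's paste history
    cat > session.json <<'EOF'
    {"versions": ["QmFirstPasteHash", "QmSecondPasteHash"]}
    EOF
    ipfs add session.json   # prints the hash of this snapshot of the history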
<AphelionZ>
MikeFair: I think I can already do one better based on some of the uhh... emergent properties that are already showing up on my app
<MikeFair>
AphelionZ: Yeah; you have a time-and-space constraint to actually do that physical storage in practice; but yes
<MikeFair>
AphelionZ: LibraryOfBabel.info
<AphelionZ>
haha yes
<MikeFair>
AphelionZ: This is another approach to the same thing
<AphelionZ>
yeah you'd have to make more atoms in the universe
<AphelionZ>
whoa what is this
<MikeFair>
AphelionZ: Rather than actually store it; you have an addressing scheme
<MikeFair>
Go to search
<MikeFair>
Type: everything that could ever be written in 3200 lower case letters is already stored in the library of babel. including these two sentences.
<MikeFair>
You should get back a list of addresses
<AphelionZ>
I see... kind of how infinity contains all the other infinities but you can also reduce it to smaller infinities
<AphelionZ>
...or something like that
<AphelionZ>
I just watched a dumb youtube video about this
<AphelionZ>
like this thing is just monkeys typing on typewriters, as it were, but it will eventually spit out the complete works of shakespeare
akkad has joined #ipfs
<MikeFair>
AphelionZ: exactly
<MikeFair>
AphelionZ: But the thing that's unique/special about it is you can quickly "reverse"/"predict" the location of any particular text
<MikeFair>
So once you link together those addresses; you can make books
<MikeFair>
You then catalog those books as "Information"
<MikeFair>
The only hiccup at the moment is the addresses tend to get longer than the data
<AphelionZ>
yeah its the same mechanism for ipfs
<MikeFair>
and the addresses can't be stored in the library
<MikeFair>
AphelionZ: Did you check out the image gallery
<AphelionZ>
no
<MikeFair>
By treating the text as encoded binary; you can get back images
<AphelionZ>
lol im there now
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<MikeFair>
aka data files
<AphelionZ>
gotcha
<AphelionZ>
i wonder if something like this might be interesting to bring into ipfs to seed the content
<MikeFair>
So yeah; this is the equivalent idea to taking an IPFS hash address and turning it into the original content
<AphelionZ>
well thats kinda what im saying
<AphelionZ>
but kinda not
<MikeFair>
I keep thinking about it; but like I said before; the problem is the addressing scheme requires more space than the data itself
<AphelionZ>
what im saying is if i store two images having data [1,2,3] and [4,5,6] and I come along and I want [2,3,5] to make a text file
<MikeFair>
Right; those blocks are already there and don't need to be stored agin
<AphelionZ>
but i thought hashing the addresses compresses the data by summing it
<MikeFair>
That's the "deduplication" aspect
<AphelionZ>
yeah i get that, what i dont get is how the addresses here are longer than the data
<AphelionZ>
and im fully open to the idea that im thinking about this wrong, im just super curious
<AphelionZ>
like, can't i get a "recipe" of 500 hashes that make a novel, and those hashes will take what, 500*256 bits or whatever
<AphelionZ>
much smaller than the book itself
<MikeFair>
AphelionZ: Because it's a seed for a key generator where the next sequence of bytes to come out of the generator is the text you're looking for
<MikeFair>
AphelionZ: it's the 256 bits part that's the problem
<AphelionZ>
yeah explain that to me because im over here thinking "im addressing stuff thats larger than 256 bits, way larger"
<MikeFair>
AphelionZ: Think "Pigeon Hole"; An 8-bit number can encode every value from 0-255 ; if I give you a 4-bit space you only have 16 values
<AphelionZ>
ok
<MikeFair>
In practice; IPFS is relying a bit on mathematical luck
<MikeFair>
There are multiple big data slices that can produce the same address
<AphelionZ>
yeah, i asked about that earlier and they said the chance was like 10^160 or something, but still nonzero
muvlon has joined #ipfs
<MikeFair>
In practice they are in the "noise" section of the "library"
* AphelionZ
nods
<MikeFair>
So no one ever wants that block
<MikeFair>
They want the one we called "information"
dryajov1 has joined #ipfs
<MikeFair>
If it becomes a problem; you can also use different/multiple hashes; which is another thing IPFS does
<AphelionZ>
right, they can switch hashing algorithms with multihash
<AphelionZ>
but im still at the "isnt size of hash < size of content?" thing
<MikeFair>
The same block won't create the same hash using different algorithms (so while two blocks might create the same hash using one, the likelihood of two algorithms doing so is nigh impossible)
pfrazee has quit [Ping timeout: 252 seconds]
<MikeFair>
AphelionZ: it is; which is A) why collisions _can_ (but don't) happen; and B) you can't use them to get the original data
dryajov1 has quit [Client Quit]
<AphelionZ>
yes, I understand the collisions thing
<AphelionZ>
how is `ipfs get HASH` different than "getting the original data"
<MikeFair>
it is getting the data
<MikeFair>
it's just that ipfs has explicitly stored it
<MikeFair>
the Library of Babel is just some code
<AphelionZ>
yeah im not advocating turning ipfs into the library of babel
<AphelionZ>
so if ipfs get HASH points to a large file, im first going to get a list of links from that hash's object right?
<MikeFair>
I know; just pointing out the distinction between having to store everything versus dynamically creating it :)
<AphelionZ>
of smaller chunks
<AphelionZ>
sure, sure
<MikeFair>
yes; that list is the DAG
<MikeFair>
ADDRESS == DAG entry
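To make the "list of links" concrete, a hedged sketch (the file name is illustrative; output columns are hash, size, name):

    ipfs add big.bin               # prints the root hash of the file
    ipfs object links <root-hash>  # one line per chunk the file was split into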
wallacoloo_____ has quit [Quit: wallacoloo_____]
<AphelionZ>
so what im saying is that those chunks can be mixed and matched to create other stuff
<MikeFair>
AphelionZ: Yes; but the chunks are 256 KiB by default; and that's a large enough size that in practice it only happens when versioning the same file
<AphelionZ>
ok thats kind of interesting
<AphelionZ>
so if i store a 256 KB gif that's all #FFF
<MikeFair>
It's extremely rare (statistically speaking) for two different pieces of actually distinct information to share 256 KiB of contiguous data
<AphelionZ>
and then somebody stores a 5MB gif that's half #FFF
<MikeFair>
AphelionZ: Yeah; that'll work
<AphelionZ>
there you'd have identical chunks with identical... addresses?
<AphelionZ>
(thank you for walking me through this, this is incredibly helpful)
<MikeFair>
AphelionZ: exactly; half of their list will contain different addresses; but the other entries will all be the same address
wallacoloo_____ has joined #ipfs
<AphelionZ>
ok, and to have identical 256 KiB chunks is rare, but still nonzero
<AphelionZ>
the piece i was missing was "contiguous" data
<AphelionZ>
i bet there are paragraphs of text that are identical between like Dan Brown and Tom Clancy ;)
<MikeFair>
AphelionZ: Yes; the actual chunk size is selected by IPFS; it can pick whatever size it wants; it could theoretically even use different sizes (but that adds complications)
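For reference, the chunker is selectable at add time; a hedged sketch (flag syntax as in later go-ipfs 0.4.x releases, so treat the exact spelling as an assumption):

    ipfs add --chunker=size-4096 big.bin   # fixed-size 4 KiB chunks
    ipfs add --chunker=rabin big.bin       # content-defined chunking, so small edits shift fewer chunks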
ygrek has joined #ipfs
<AphelionZ>
or compiled C headers just sayin
<MikeFair>
AphelionZ: I'd suspect that as people store bigger data the all 0s address comes up often
<AphelionZ>
MikeFair: that's really intersting
<MikeFair>
AphelionZ: same with the all FFs
<AphelionZ>
yeah honestly i bet there's statistical analysis to be done
<AphelionZ>
i bet there are distributions
<AphelionZ>
kinda like Benford's law of number distributions
<AphelionZ>
that could certainly help with the time/space storage problem
<MikeFair>
AphelionZ: It has to do with conventional practices: primarily in "header" records and "padding" information
pfrazee has joined #ipfs
<MikeFair>
AphelionZ: I have an algorithm I'm working on to reduce the number of possible solutions to the hash to a small enough number that you can "crack" the original data
<AphelionZ>
badass
<MikeFair>
Basically get you close enough so that a GPU could realistically "figure it out"
<AphelionZ>
you could do something similar to that RAISR thing, too
<AphelionZ>
figure out the first 25% of it and "machine learn" the rest
<AphelionZ>
also with multihash you could create "reserved" hashes that always describe smallish bits of data like headers
<AphelionZ>
Aa1 = stdin, or whatever
<MikeFair>
AphelionZ: The problem is I can't communicate the AI info to you in advance cheaper than I can just send you the data
<MikeFair>
:)
slothbag has left #ipfs [#ipfs]
<AphelionZ>
;)
<MikeFair>
AphelionZ: Making smaller codes for more frequently used data helps with table lookups; but that's not what I'm doing
<AphelionZ>
no you're trying to decompress hashes on the fly, i dig it
pfrazee has quit [Ping timeout: 245 seconds]
<AphelionZ>
im just still thinking out loud
<MikeFair>
AphelionZ: This algorithm is "describing" what the data "looks like" (think of a person giving a police sketch artist instructions)
<AphelionZ>
have you had any success yet?
<MikeFair>
AphelionZ: Actually decompress is the more important thing here; I don't care if it takes a while to compress (because that only happens once)
<MikeFair>
AphelionZ: A bit
<MikeFair>
And I came up with a new trick I'd like to test out when I can get some time to do it
<MikeFair>
But yeah; that's the space IPFS is playing in :)
<MikeFair>
Addressing all our info
<AphelionZ>
and i guarantee you some sequences of bits are stored more than others
<MikeFair>
Put a note in the README.md of IPFS that says "And you REALLY want to look at this doc"
<MikeFair>
:)
jedahan has joined #ipfs
tmg has quit [Ping timeout: 240 seconds]
sametsisartenep has quit [Quit: zzz]
ygrek_ has joined #ipfs
<AphelionZ>
daviddias: i am a ready and willing tester
<AphelionZ>
i just found myself wishing it was more like the native node fs api
<daviddias>
node fs API doesn't have a 'read all the files from this dir'
<daviddias>
just has a 'readFileStream', which is the same as '.cat'
ygrek has quit [Ping timeout: 255 seconds]
<MikeFair>
daviddias: Any chance of adding function readFileStream(self, f) { return self.cat(f); }
<daviddias>
any strong reason for it?
<MikeFair>
I don't think of cat as a function; I use read/open/getData
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<MikeFair>
I think of it as a command-line thing; and if the file I/O library I'm used to using calls it readFileStream, and this new library has something called readFileStream, then I've got a built-in expectation for what readFileStream does
aquentson has quit [Ping timeout: 245 seconds]
<daviddias>
Fair
<MikeFair>
If the normal file library I'm used to using called it cat; then I'd be asking the guy who made the "readFileStream" function to make cat be the function that works as expected ;)
<daviddias>
I understand
<MikeFair>
Don't get rid of the other one because that's what you guys call it
* MikeFair
nods.
<daviddias>
It is just important to keep in mind that IPFS is extensive as it is
<daviddias>
and we want our API to be standardized across clients, implementations, and languages
<daviddias>
so that it makes it easy to move around
<MikeFair>
which is why I suggested using a wrapper instead of a rename
<MikeFair>
or perhaps ipfs-js-jsFileIO
<MikeFair>
Has anyone in the group worked with iSCSI clients before?
<MikeFair>
Because I totally think you could make an IPLD tree that was a preconstructed hash table of sector addresses; then link to the data blocks
edrex has joined #ipfs
<MikeFair>
and make ipfs an iSCSI provider
<MikeFair>
(which is one of my thoughts about testing with IPLD)
<MikeFair>
hash table == oct tree or something like that
<MikeFair>
Something where the identifier/address of the "disk block" was a keyname in IPLD
mguentner has quit [Quit: WeeChat 1.6]
<MikeFair>
So the DAG link to the data block [0x8899aabbccddeeff] (64 bits) is at the IPLD reference / 0x88 / 0x99 / 0xaa / 0xbb / 0xcc / 0xdd / 0xee / 0xff
<AphelionZ>
daviddias: hmm maybe i can use cat for my use case
pfrazee has quit [Read error: Connection reset by peer]
pfrazee has joined #ipfs
wdfwefewvfgew has joined #ipfs
wdfwefewvfgew has left #ipfs [#ipfs]
<kyledrake>
whyrusleeping lgierth It's up! ipfscdn.neocitiesops.net (A and AAAA records). Right now it's in New York, Amsterdam, Silicon Valley, Miami, Tokyo, and Singapore. There's 3-6 more datacenters I can add it to. No http gateway or pinning or anything yet, just 4001. Ideas for what to do next are welcome. Run traceroute -A ipfscdn.neocitiesops.net to see the
<kthnnlg>
Hi All, I'm working behind a company firewall. Nevertheless, I would like, if possible, to run from behind the firewall an ipfs node that can serve files to the swarm. It is unlikely that I can open a port. So, does anyone know if there is a solution for my situation, or do I really need to open a port? Thanks in advance.
<lgierth>
kthnnlg: yes, the websockets transports on ports 80/443
<lgierth>
we'll have it right on ipfs.io in a few days
<lgierth>
so that looks just like regular http/https
<lgierth>
kthnnlg: are you at inria? your hostname says so
<lgierth>
<3 inria
<kthnnlg>
Awesome! So, the idea is that anyone can run an ipfs daemon on their machine, regardless of whether they are behind a firewall or not?
bastianilso has quit [Quit: bastianilso]
<lgierth>
yeah we definitely wanna evade any restrictions of connectivity
<kthnnlg>
Indeed, what I'm hoping to do is create a means to transport large amounts of experimental data across the internet
<lgierth>
one more nice feature that's coming up is circuit switching, you'll be able to tunnel peer connections through another peer
<lgierth>
so that in your use case, you can reach even nodes that don't speak the websockets transport
<kthnnlg>
lgierth: So, should I wait until after the websockets tutorial appears on the ipfs website? Or, would it be straightforward for me to attempt it now? I'm unfamiliar with websockets or what it does.
<lgierth>
websockets is just bidirectional streams through a long-lived http connection
Boomerang has quit [Ping timeout: 245 seconds]
<lgierth>
it exists and works, but is a bit hacky and non-obvious at the moment, i'll be working on making that easy and straightforward the next two weeks
<lgierth>
it basically works by changing the usual addresses for connecting and listening from /ip4/.../tcp/4001/ipfs/... to /ip4/.../tcp/80/ws/ipfs/...
<kthnnlg>
lgierth: Super! I'm a huge fan of ipfs. This ability to operate behind a firewall is exactly the sort of feature that I need for it to be useful.
<lgierth>
or /dns4/example.com/tcp/443/wss/ipfs/... if there's https involved
<lgierth>
great :)
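For the impatient, a hedged sketch of enabling the websockets transport today (the port is an example, the wss multiaddr form is copied from lgierth's lines above, and <peer-id> is a placeholder):

    # add a /ws listen address alongside the usual tcp one, then restart the daemon
    ipfs config --json Addresses.Swarm \
      '["/ip4/0.0.0.0/tcp/4001", "/ip4/0.0.0.0/tcp/8081/ws"]'
    ipfs swarm connect /dns4/example.com/tcp/443/wss/ipfs/<peer-id>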
Boomerang has joined #ipfs
dryajov1 has joined #ipfs
<lgierth>
the primary motivation for it at the moment is interoperability with ipfs in web browsers and websites, but evading firewalls is a very welcome side-effect
bastianilso has joined #ipfs
<kthnnlg>
lgierth: is there a tutorial available, or is the essential knowledge spread across multiple locations?
<lgierth>
for websockets there's nothing at the moment, just code
<lgierth>
very non-obvious code :)
<kthnnlg>
:)
<kthnnlg>
ok, maybe I'll wait for the tutorial
<kthnnlg>
I have a few ideas w.r.t. ipfs that I've been aching to try
<lgierth>
cool
<lgierth>
feel free to write them up in the ipfs/notes repository :)
<kthnnlg>
One is using ipfs to host my experiments. It's pretty obvious, but nonetheless helpful
<kthnnlg>
The other one is porting a purely functional data structure that we developed a few years back
<kthnnlg>
ok
<lgierth>
haah. ipfs is perfect for functional data structures
voldyman has joined #ipfs
<kthnnlg>
Yea, we have one called chunkedseq that can push/pop on both ends in constant time, concat in log time, and split at a specified position in log time. Internally it's a tree representation whose branching factor is a parameter to the structure. So, my hope is to be able to build some humongous instances of this structure in ipfs. :)
<lgierth>
have you looked at ipld yet? github.com/ipld/specs -- i think it's going to be useful if you're looking into data structures
<lgierth>
i'm not too good with data structures unfortunately -- my area is networking :)
Foxcool has quit [Read error: Connection reset by peer]
Foxcool has joined #ipfs
oma[m] has left #ipfs ["User left"]
Foxcool has quit [Read error: Connection reset by peer]
jonnycrunch has joined #ipfs
apiarian has joined #ipfs
ulrichard has quit [Remote host closed the connection]
ylp has left #ipfs [#ipfs]
apiarian has quit [Ping timeout: 240 seconds]
<frood>
lgierth: have you guys had any problems with websocket set-up times?
<lgierth>
frood: not sure right now, i haven't worked much on the browser/js side of that. i would guess so though
<frood>
switching protocols adds several hundred ms and there doesn't seem to be a good way to mitigate that
<lgierth>
all the usual properties of websockets apply
<frood>
webRTC also evades firewalls, but you need an introduction
<frood>
well, sort-of.
<lgierth>
there's unfortunately no good webrtc impl in go
<frood>
ah, that's unfortunate
<lgierth>
there's a /libp2p-webrtc-star transport though which browsers and nodejs nodes can use
<noffle>
morning o/
<lgierth>
hey hey noffle
<noffle>
lgierth: how's it going, lars? :)
* lgierth
has a cold
<lgierth>
coding with my mouth open all day ;)
<lgierth>
and you?
<noffle>
aw
<noffle>
chilly; it's a surprisingly cold morning in san jose, and my apartment has no heat
<noffle>
but warm coffee fixes that
<lgierth>
:)
<noffle>
what are you hacking on lately?
<lgierth>
today fixing migrations stuff that we screwed up in the 0.4.5 release :/
<lgierth>
and preparing for the ipfs-in-web-browsers sprint
<lgierth>
(the go-ipfs and ipfs.io side of it)
<noffle>
cool
aquentson has joined #ipfs
<lgierth>
what's the map of the day?
<noffle>
playing with http://ledger-cli.org for $$$ management right now; then maybe see if I can get ipget working entirely on gx deps so the build repros. haven't done any ipfs hacking in a while!
<noffle>
code for money in the afternoon
wak-work_ has joined #ipfs
aquentson1 has quit [Ping timeout: 255 seconds]
wak-work has quit [Ping timeout: 255 seconds]
f[x] has joined #ipfs
<sprint-helper>
The next event "All Hands Call" is in 15 minutes.
<noffle>
is this on github somewhere? I can send a pr
<Kubuxu>
whyrusleeping/gexpin iirc
<noffle>
yep
<noffle>
thanks for all the help
<victorbjelkholm>
Kubuxu: your makefiles stuff has been merged, no? Trying to run the go-ipfs job on ci.ipfs.team with origin/master but it doesn't work
<Kubuxu>
it was
<Kubuxu>
you didn't deploy jenkins
<Kubuxu>
after it was merged
<Kubuxu>
victorbjelkholm: ^^
<victorbjelkholm>
ooooh
<victorbjelkholm>
I see. My bad. Thanks Kubuxu for spotting my sillynesses
<Kubuxu>
but it won't work without workers
<Kubuxu>
stronger workers
<Kubuxu>
so deploying it won't change much
wak-work has quit [Ping timeout: 276 seconds]
apiarian has joined #ipfs
<kthnnlg>
Quick question: I have a file that I would like to share with peers via the internet. Now, I notice that if I take the hash code $h and run `ipfs get $h`, then the command times out. However, if I run `ipfs daemon` in the background, the transfer succeeds. So, my question is, in general, must a client always run `ipfs daemon` in order to receive files via the network?
cemerick has joined #ipfs
<lgierth>
kthnnlg: yes
<lgierth>
there's no "adhoc nodes" at the moment
<kthnnlg>
super, thanks
<lgierth>
i *think* if the file is already stored locally, it'll work without a daemon running
<lgierth>
at least ipfs add works without a daemon running
<kthnnlg>
correct
<lgierth>
cool
edrex has quit [Ping timeout: 264 seconds]
Boomerang has quit [Quit: Lost terminal]
cwahlers has quit [Ping timeout: 256 seconds]
<kumavis>
> I watched your talk from the Seattle hackathon and I was wondering how subscribing to blocks works... in particular, how does the whitelisting/blacklisting work? how do you stop someone from spamming invalid blocks into the pubsub?
<kumavis>
afdudley yes, this is a good question. requires managing peer reputation and kicking them if they are being spammy. that can be generic libp2p level logic. additionally you would want to add app-level verification (check eth-block PoW) and have that also affect peer reputation
kthnnlg has quit [Remote host closed the connection]
cwahlers has joined #ipfs
galois_d_ has joined #ipfs
wak-work has joined #ipfs
galois_dmz has quit [Ping timeout: 256 seconds]
s_kunk has quit [Ping timeout: 240 seconds]
G-Ray_ has quit [Quit: G-Ray_]
adn[m] has joined #ipfs
kthnnlg has joined #ipfs
<kthnnlg>
One more quick question: to download a remote file, I run `ipfs daemon & pid=$!; sleep 20; ipfs get ...; kill $pid`. The concern I have is the use of the sleep command. It seems quite brittle. Is there a better way to ensure that by the time `ipfs get ...` runs I already have an ipfs daemon running? thanks
<noffle>
rad progress on ipld
rcat has joined #ipfs
<Kubuxu>
kthnnlg: yes, to download you have to run daemon
cyanobacteria has joined #ipfs
<frood>
kthnnlg: hacky way to check if it's running: ps aux | grep '[i]pfs daemon'
<kthnnlg>
frood: right, that may work
<kthnnlg>
it's less hacky than pausing for 20 seconds
<kthnnlg>
;)
<frood>
can use awk to grab the pid from there too
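A hedged alternative to the fixed 20-second sleep: poll until a command that needs the daemon succeeds ($h is the hash variable from kthnnlg's snippet):

    ipfs daemon &
    pid=$!
    until ipfs swarm peers >/dev/null 2>&1; do   # errors until the daemon is up
      sleep 1
    done
    ipfs get "$h"
    kill "$pid"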
<AniSky_>
whyrusleeping compelling reason for IPFS on the browser: caching, small library upgrades, etc. For example, say jQuery pushes a massive security update with 3 lines of code changed. IPFS with Rabin means only one block gets updated, thus not killing every cache everywhere, whereas current web browser caches get invalidated.
<AniSky_>
Which means massive strains on the internet to repopulate caches.
<AniSky_>
Plus the standard "your neighbor has jQuery, why can't you get it from them."
hundchen_ has joined #ipfs
s_kunk has joined #ipfs
hundchen_ has quit [Client Quit]
hundchen_ has joined #ipfs
hundchenkatze has quit [Ping timeout: 256 seconds]
yoshuawuyts has quit [Excess Flood]
cyanobacteria has quit [Ping timeout: 255 seconds]
<AphelionZ>
I need to run my ipfs api element past the functional programming crew
Encrypt has joined #ipfs
palkeo has joined #ipfs
sprint-helper has quit [Remote host closed the connection]
sprint-helper has joined #ipfs
tmg has joined #ipfs
Encrypt has quit [Quit: Quit]
<MikeFair>
AphelionZ: I just came up with a pseudo-need for a similar concept
<MikeFair>
:)
aquentson1 has quit [Ping timeout: 245 seconds]
<MikeFair>
well earlier today :)
<AphelionZ>
yeah? you should help me user test it!
<AphelionZ>
i need feedback
aquentson has joined #ipfs
<MikeFair>
CryptoCurrency networks deal with a lot of encoded text strings
<MikeFair>
And actually executing a multiple-signature txn requires the text string to be in the presence of each person in turn
<MikeFair>
They "Sign" by adding their crypto signature to it
<MikeFair>
I've been arguing that there needs to be an "In Network" way to store the presigned txn so users can find it; sign it; and update it for the other people
<MikeFair>
So that way people can sign as they get the opportunity to
<MikeFair>
And their software can look to see if their accounts have any "pending signature requests"
<MikeFair>
AphelionZ: is it up anywhere
<AphelionZ>
AphelionZ: i can't host it anywhere because it requires port 5001 open
<AphelionZ>
MikeFair: i mean
<AphelionZ>
see above hehe
<AphelionZ>
so you need to download it and run it with python -m SimpleHTTPServer or some such thing
<MikeFair>
ipfs.io/ipfs/Your_page.html ?
<AphelionZ>
no im not being clear
<AphelionZ>
my app makes HTTP calls to localhost:5001, which is the ipfs HTTP API
galois_d_ has quit [Remote host closed the connection]
<MikeFair>
I thought it was an client side JS thing
<AphelionZ>
so to run it you need an ipfs daemon running locally
<MikeFair>
RIght; I have that
gmoro has joined #ipfs
<AphelionZ>
hmm, you raise an interesting point, which I want to test soon
galois_dmz has joined #ipfs
<MikeFair>
So it shouldn't matter where the script comes from :)
<AphelionZ>
but for now i havent hosted it anywhere because of that requierment
<AphelionZ>
requirement*
<AphelionZ>
since API and app need to be on the same domain
<MikeFair>
have you put the directory for your project in ipfs?
<AphelionZ>
no, i havent added it yet. just on github
<MikeFair>
maybe try github.io and I'll test it
<AphelionZ>
no, it wont work thats what im saying
aquentson1 has joined #ipfs
<AphelionZ>
_it needs to be run locally right now_
<MikeFair>
It'll generate a CORS thing
electrostereotyp has quit [K-Lined]
aquentson has quit [Ping timeout: 260 seconds]
ZaZ has quit [Read error: Connection reset by peer]
<MikeFair>
Hmm; well I think it will get really close; but we'll just have to see :)
<MikeFair>
If I have ipfs daemon running; and I load the page; broswer should just do what its told
<MikeFair>
The thing I'd like to figure out is how to securely store a private key in ipfs
<AphelionZ>
whats the command to add a whole directory to ipfs?
<AphelionZ>
ipfs add -wr . ?
<MikeFair>
ipfs add -r dirname
<MikeFair>
oh that'll probably work too if you're inside the directory
<AphelionZ>
and that will give me a fancy HASH/posix/file/system thing?
<MikeFair>
you can see it in ipfs files
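A hedged sketch of what adding a directory looks like (directory name and hashes are placeholders):

    $ ipfs add -r mysite/
    added QmAAA... mysite/index.html
    added QmBBB... mysite/app.js
    added QmCCC... mysite
    # the last hash is the directory root; files resolve at /ipfs/QmCCC.../index.html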
<whyrusleeping>
MikeFair: storing a private key?
<whyrusleeping>
You'll just want to make sure you encrypt it securely
<MikeFair>
whyrusleeping: Yes
<MikeFair>
There are two use cases
<MikeFair>
(1) Backup my own keys; and (2) automated bot signing
ShalokShalom_ is now known as ShalokShalom
pfrazee has quit [Remote host closed the connection]
<AphelionZ>
i need that for user auth too, eventually
ribasushi has quit [Ping timeout: 260 seconds]
AniSky_ has joined #ipfs
<MikeFair>
I'm running a thought experiment on how to enable a bot to sign txns on the Stellar Network without having to embed something secret in the code
<MikeFair>
As well as cryptokey recovery services
<MikeFair>
So when you make your key (keysafe) you can have it backed up
<MikeFair>
But if you forget your passphrase it doesn't help. To recover the passphrase, I was thinking of something like: break up and encrypt the passphrase and store the pieces with at least three different non-malicious agents
<MikeFair>
Somehow; only when all three pieces are brought together can you get any information out of it
<AphelionZ>
triforce auth
<MikeFair>
(and seeing an individual piece doesn't help you figure out which section/piece you have)
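What MikeFair describes is essentially Shamir's secret sharing (my label, not his); a minimal sketch with the common ssss CLI tool, assuming it is installed:

    # split the passphrase into 3 shares, all 3 required to reconstruct it
    echo "my wallet passphrase" | ssss-split -t 3 -n 3
    # later, recombine by pasting the 3 shares when prompted
    ssss-combine -t 3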
<AphelionZ>
isnt that kinda still only 256*3 bits of entropy
<AphelionZ>
because you just need to guess three hashes
<AphelionZ>
not that that's easy, by any means
<MikeFair>
Well one way to attack me is to attack my backups
<MikeFair>
So either the locations need to be secret; or it has to be hard enough to get that you'll go find easier fish
<MikeFair>
And given that the thing that's backed up is the passphrase to unlock all the signing keys; it could make a difference
<MikeFair>
For the bot; I'm hoping it can make its own random secret; then store it in IPFS obscured; so that I can't watch its block requests to figure out where it put it
<whyrusleeping>
MikeFair: i like that idea of splitting things up
<MikeFair>
yeah I was thinking if the key was "hidden" inside of 3 larger blocks; then you'd have to know the blocks and the locations within them
<MikeFair>
Maybe using a PRNG to come up with the "byte address" within the set
<MikeFair>
err byte address sequence
<MikeFair>
then you'd just need a seed; and the blocks
<MikeFair>
and more specifically a non-attackable seed
<MikeFair>
So far I keep coming up empty :)
<MikeFair>
Given that we have to assume the algorithmic method is known to all; there's no method I can find that the script could execute that we couldn't just replay
<MikeFair>
how can a script invent a secret for itself (a seed key for a PRNG)
ygrek has joined #ipfs
<MikeFair>
and store it in a way we couldn't predict; but that it can recreate
<whyrusleeping>
Yeah... this really sounds like youre looking at security through obscurity ;)
<whyrusleeping>
what you really need is to generate a really large key, use that to encrypt the stuff
aquentson has joined #ipfs
<whyrusleeping>
every other strategy doesnt really increase the security, it just makes it harder to figure out how to break it
<MikeFair>
(if not in implementation; just in concept; I guess RAID too)
<whyrusleeping>
yeah, raid 5-6 etc use reed solomon type coding
ShalokShalom has joined #ipfs
<whyrusleeping>
'erasure coding'
<whyrusleeping>
kevina: oh, yeah.
<MikeFair>
whyrusleeping: I can't decide if I like that or not ;) (one less piece required by the theoretical attacker)
<MikeFair>
yeah; I like it
<MikeFair>
:)
pfrazee has quit [Ping timeout: 255 seconds]
<frood>
I wish R-S was better than O(n^2)
<MikeFair>
I could give the script a credit card type smart chip
<MikeFair>
install a reader on the machine
<frood>
how's your physical security?
<AniSky_>
whyrusleeping I'd like to put together a dossier about the different problems on the web and how IPFS can solve them.
<MikeFair>
I'm more worried about online attacks
<AniSky_>
Could I ask for reviews of sections as I write?
<AniSky_>
Some areas I'd like to cover: Caching, Availability, Neutrality
ygrek has joined #ipfs
<cblgh>
AniSky_: i'd be interested in reading that when you're done
<cblgh>
pls ping in that case :3
ygrek_ has quit [Ping timeout: 240 seconds]
<MikeFair>
frood: The idea of using a passphrase/username and some salt to make a "login session" without a central server really inspired me. "A logged in session directly to the Internet" not some centralized domain
ygrek_ has joined #ipfs
<whyrusleeping>
AniSky_: you should definitely do that, we can provide review :)
<AniSky_>
Alright. I've been making some progress.
<AniSky_>
Will make a github repo.
<whyrusleeping>
:)
<AniSky_>
Unless you guys want to? :)
<AniSky_>
The target is somewhat-technical people but no code or algorithms, so comprehensible by your average IT guy.
ygrek has quit [Ping timeout: 240 seconds]
<AniSky_>
It's basically a giant expansion to "why we need this" from the specification.
<AniSky_>
whyrusleeping
<AniSky_>
Plus some cute diagrams.
pfrazee has joined #ipfs
pfrazee has quit [Read error: Connection reset by peer]
pfrazee has joined #ipfs
<whyrusleeping>
AniSky_: we can always pull your repo into the org later
<AniSky_>
Alright.
Hein has quit [K-Lined]
pfrazee has quit [Read error: Connection reset by peer]
pfrazee has joined #ipfs
jkilpatr has quit [Ping timeout: 240 seconds]
<MikeFair>
Is ipfs dag get supposed to be able to retrieve ordinary json object entries or only links?
<MikeFair>
I added a small json doc with ipfs dag put
<MikeFair>
and then tried ipfs dag get; and it only brings back the name of the top-level object
<MikeFair>
json => "top":{"someField":"someData"}
<whyrusleeping>
kevina: reviewed
pfrazee has quit [Ping timeout: 255 seconds]
<MikeFair>
but ipfs dag get thehash/top keeps complaining about resolving through something with no links
Malus has joined #ipfs
pfrazee has joined #ipfs
f[x] has joined #ipfs
<dryajov>
are there any plans for a c++ implementation?
aquentson1 has joined #ipfs
ygrek_ has quit [Ping timeout: 252 seconds]
rendar has quit [Ping timeout: 252 seconds]
dimitarvp has joined #ipfs
aquentson has quit [Ping timeout: 255 seconds]
<whyrusleeping>
MikeFair: ah, yeah. Thats going to be supported very soon
<whyrusleeping>
sorry about that
<MikeFair>
no worries, I just wanted to know I was using it properly; so does that mean my simple test needs to make a link entry be the top thing?
<whyrusleeping>
you can have a link entry anywhere in there
<whyrusleeping>
but the current (incomplete) version of dag get can only hit paths that point to a link
<MikeFair>
I'm just trying to test the output
pfrazee has quit [Read error: Connection reset by peer]
Malus has quit [K-Lined]
pfrazee has joined #ipfs
<AniSky_>
whyrusleeping is the IPFS.io website on IPFS? If so, what's the hash?
<whyrusleeping>
dig _dnslink.ipfs.io TXT
grosscol has quit [Quit: Leaving]
rendar has joined #ipfs
<AphelionZ>
whyrusleeping: can I pick your brain about how to implement versioning in my pastebin?
<whyrusleeping>
er, i guess we're just using the main dns record for the main site
<whyrusleeping>
dig ipfs.io TXT
<MikeFair>
AphelionZ: What was your question?
<whyrusleeping>
AphelionZ: i'm a little busy right now, but i can try to respond
<AphelionZ>
its ok, i can wait
<MikeFair>
AphelionZ: I assume you are able to get the hash for each change ; and you're now just wondering how to store it?
<AphelionZ>
MikeFair: yeah I want previous and next links on my UI
<AphelionZ>
i know I want to likely use ipld and write to the dag
<MikeFair>
AphelionZ: You have an ipns entry yes?
<AphelionZ>
not yet, but i can easily
<MikeFair>
Well you have to root the tree somewhere; so you can either root all of pastebin; all of that session; or all for that user (or some variant)
kerozene has quit [Ping timeout: 240 seconds]
<AphelionZ>
well its much simpler than that, i dont need to store the ENTIRE pastebin anywhere
<MikeFair>
I assume you want previous/next to work for other people on other browsers or is "only for this session" ok?
<AphelionZ>
no
<AphelionZ>
i need to store three pieces of metadata for each file
<MikeFair>
If you just keep track of the previous HASH on the current HASH then you can "follow the trail"
<AphelionZ>
mypastebin.zzzyyyy/#HASH
<MikeFair>
right; but that doesn't require an array
<AphelionZ>
I can go back the chain with just prev, yes
<AphelionZ>
if I create content from the hash
<MikeFair>
either way; array is fine
<AphelionZ>
and then you alter the hash with different content
<AphelionZ>
it can fork
slothbag has joined #ipfs
<MikeFair>
What you can't do unfortunately is add "forward history" to a link; well you can use browser storage to carry the information between hashes; but it means historical hashes would move
<MikeFair>
I'm thinking just put your data in that Object
<AphelionZ>
I could write an intermediary object between the current and the next revision
<MikeFair>
I don't see how that helps with anything
<AphelionZ>
but even then the original object will have no way of referencing it
<AphelionZ>
yeah
<AphelionZ>
yeah i guess you can only go backwards
<MikeFair>
once they hit "submit" the block is essentially locked
cyanobacteria has joined #ipfs
<AphelionZ>
thats too bad, i'd love to be able to land in the middle of the chain and traverse forward and backward
<AphelionZ>
but you can't do it without local storage
<MikeFair>
It knows its history; but not its future
<AphelionZ>
and to your point, you could potentially do it with ipns
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
<AphelionZ>
but that's a significantly more complex system
<MikeFair>
Well you can with an ipns root
<MikeFair>
not really
<MikeFair>
well let's start with history and we can add ipns to the ui later
<AphelionZ>
ok so i have an ipns root that's the pastebin's "database"
<AphelionZ>
constantly shifting
<MikeFair>
the data won't come from what you stored; it'll get looked up separately
<MikeFair>
yeah
<MikeFair>
constantly keeping track of the "head" nodes
<AphelionZ>
and that simply points to a json object
<AphelionZ>
do you use a message queue or something to keep track of race conditions
<MikeFair>
you ignore them
<AphelionZ>
if i have 1000 concurrent users
<AphelionZ>
why ignore them?
<MikeFair>
each get a different file in ipns
<AphelionZ>
i see, ok
<MikeFair>
ipns points to a directory of json files
<AphelionZ>
so that's effectively their "user id"
<AphelionZ>
oh interesting
<AphelionZ>
ok
<MikeFair>
named for their user_id sounds perfect
<MikeFair>
but let's just get "back" working because I think you can do that pretty quickly
<AphelionZ>
yeah, agreed
jkilpatr has joined #ipfs
<MikeFair>
The user hits submit and you have/put together the object {"prev": [Array of HASH], "data":"theText", "language":"thelanguage"}
<AphelionZ>
prev doesnt need to be an array
<AphelionZ>
I only thought next did
<AphelionZ>
and i wont need language just yet because im not doing syntax highlighting
M-ou-se has quit [Ping timeout: 258 seconds]
<MikeFair>
you can even do {"head":"IPNS_HASH", "prev": [Array of HASH], "data":"theText", "language":"thelanguage"}
<MikeFair>
It's harmless to add them
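A hedged sketch of storing one such revision (field names follow the chat rather than any fixed schema; the printed hash is whatever dag put returns):

    echo '{"prev": [], "data": "first draft", "language": "text"}' | ipfs dag put
    # -> prints the new node's hash; put it in the next revision's "prev" array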
<AphelionZ>
so whats the difference between dag data and this
<AphelionZ>
i thought there was some sort of underlying metadata structure
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<MikeFair>
Nothing; in DAG data you'd have an extra option to put a link object in there
<MikeFair>
"/":{"link":"HASH"}
M-ou-se has joined #ipfs
<MikeFair>
which is directly traversable by ipfs dag get
<AphelionZ>
...so what's link? what does link describe?
<MikeFair>
So you have all these little JSON snippet HASHes out there
<MikeFair>
The HASH of another JSON object
<AphelionZ>
isnt that more like what I want?
<AphelionZ>
im not trying to be dense i just want to make sure i understand this
<MikeFair>
I'm not sure yet
<MikeFair>
but it doesn't work atm
<AphelionZ>
oh
<AphelionZ>
haha
<MikeFair>
At least I haven't been able to make it do anything useful
<MikeFair>
I can put data in; just not get it out
<MikeFair>
With DAG; that [Array of HASH] in prev would be an array of links
<MikeFair>
SO Address: ipfs dag get HashOfCurrentNode/prev/0/data
<AphelionZ>
see that sounds super useful
<AphelionZ>
but if its not ready its not ready
<MikeFair>
Would go to the current node's json file; enter the prev field; and find the 0th element of the array (which it sees is a link to another HASH); and traverse that HASH to get the data field; which is a string
<MikeFair>
and return that string
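A hedged sketch of that traversal, with one caveat: in the dag put JSON form a link is usually written as a bare hash string under the "/" key, i.e. {"/": "HASH"}, rather than "/":{"link":"HASH"} (hashes and <new-hash> below are placeholders):

    echo '{"prev": [{"/": "zdpuPrevRevisionHash"}], "data": "v2"}' | ipfs dag put
    ipfs dag get <new-hash>/prev/0/data   # follows the link into the previous revision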
DiCE1904_ has quit [Read error: Connection reset by peer]
ygrek has joined #ipfs
captain_morgan has quit [Remote host closed the connection]
<MikeFair>
If you change any part of the subtree; all the address changes roll up the chain and move the addresses of the database
<AphelionZ>
gonna do #7 and #8 first because writing unit tests brought those to light
<AphelionZ>
as writing unit tests is wont to do
unquellable has joined #ipfs
<MikeFair>
But yeah if you just load the whole object up with all the metadata and the data in one object you'll get a single hash that does both for you
<AphelionZ>
yeah and that may be simpler in the end
<MikeFair>
and it's directly compatible with the DAG approach
<MikeFair>
When you're/it's ready instead of setting/getting HASHes from the array; you put links
<MikeFair>
then you can read all of history as if it was all in one magical JSON document
koshii has quit [Read error: Connection reset by peer]
matoro has joined #ipfs
gmoro has quit [Remote host closed the connection]
gmoro has joined #ipfs
koshii has joined #ipfs
pfrazee has quit [Ping timeout: 260 seconds]
soloojos has quit [Ping timeout: 240 seconds]
<dimitarvp>
Hey all. A pretty general question follows. I'd like to test IPFS locally with several hundred simulated nodes -- I want some of them to pin several hashes, others to download, maybe kill some of them etc. Can somebody provide a good reference to open-source software that can provide you with such a local virtual infrastructure? I know about several but I don't have the necessary expertise to make a choice. :(