<MikeFair>
lgierth: whyrusleeping gave some links of where to create the new format, and where to register it, but I lost them. I don't suppose you could point me in the right direction again
vsimon has quit [Remote host closed the connection]
wking has joined #ipfs
red5d[m] has joined #ipfs
dimitarvp has quit [Quit: Bye]
Anton_ has quit [Ping timeout: 264 seconds]
infinity0_ has joined #ipfs
infinity0_ has quit [Changing host]
infinity0 has joined #ipfs
infinity0 has quit [Killed (verne.freenode.net (Nickname regained by services))]
Anton_ has joined #ipfs
Fess has quit [Quit: Leaving]
<lgierth>
also /topic has logs ;)
jesse22 has joined #ipfs
vsimon has joined #ipfs
* MikeFair
sighs.
<MikeFair>
yep :)
<MikeFair>
Is there anything aside from those two?
<lgierth>
i'm not 100% sure - stebalien would know too
<MikeFair>
that I should be focusing on -- I'm assuming there aren't any proper docs walking someone through this --- and I'm hopeful that eventually these will be "loadable"
<MikeFair>
So ipfs dag get 'git-hash-in-CIDv1-format'
<lgierth>
yeah, and the hashes will be "the same"
<MikeFair>
because CIDv1 lets you do that
<lgierth>
last big limitation is that git blobs are regularly over 2MiB big, and bitswap will currently not transfer them
<MikeFair>
I wonder if those could be added to ipfs, then have the git commit reference the blobs that way
<MikeFair>
(the ipfs localCache on the machine hosting the git repo)
<MikeFair>
this likely doesn't work on windows, does it
<MikeFair>
plugins that is
<jesse22>
whyrusleeping: got some graphs working
rcat has quit [Remote host closed the connection]
vsimon has quit [Remote host closed the connection]
gozala has quit [Remote host closed the connection]
vsimon has joined #ipfs
newhouse has quit [Read error: Connection reset by peer]
leeola has quit [Quit: Connection closed for inactivity]
<whyrusleeping>
jesse22: yeah?
<MikeFair>
Can anyone tell me what the new /gx/ business I'm beginning to see is?
samm has joined #ipfs
<jesse22>
whyrusleeping: that repo had a 10 second sampling rate which i think was too coarse, so i updated it to 1 second resolution - i'm still not sure what to make of the data
<jesse22>
which looks nice, i think, and decent (but something like 1.5/2x slower than i'd like to see)
<MikeFair>
Is the second one the first time you downloaded the file, or on a second machine
<jesse22>
MikeFair: each test is immediately after i clear the repo with 'gc'
<jesse22>
so the second graph is from a single host on a LAN machine next to me. the other test is from a maybe 6-7 geographically distributed pinned copies
<jesse22>
for some reason, the more distributed the files get, the slower the transfer speeds
<MikeFair>
Did you isolate just the two nodes?
<MikeFair>
Like pull the upstream network plug or something?
<jesse22>
the data is isolated, yes - different data, same rough size, but they're all part of the same swarm
<MikeFair>
I'm assuming this is go-ipfs -- or is it js-ipfs?
<jesse22>
only go-ipfs
<MikeFair>
okay then, when you say "node right next to me" I'm not sure what that means --- did that local node pin all the blocks for the file?
<MikeFair>
When you add the file it gets distributed around the swarm
<jesse22>
yes, exactly, that node added a file (containing unique blocks to that node) - the "right next to me" means, more or less on the same LAN segment (as opposed to hopping routers/WAN links, etc.)
<MikeFair>
jesse22: But adding a file to a local computer doesn't mean the file is going to come from there
<jesse22>
if the blocks are only unique to that host, and no other host (other than mine) have requested those blocks, they're going to stay on that node, yes?
<MikeFair>
I don't think so
<jesse22>
hosts don't opportunistically request random blocks, do they?
<jesse22>
shoot, i have to go
<MikeFair>
It's about where the XOR algo thinks the block should be in the swarm
<jesse22>
'should be in the swarm' that's interesting - i guess i'm missing something core then
<MikeFair>
Your local node might not see that local computer as a "peer"
<lgierth>
if we're connected with them, we'll always ask them for what's in our wantlist, no matter their key distance in XOR space
<MikeFair>
lgierth: help clarify something for me; when we "ipfs add" where do the blocks end up
<MikeFair>
on the node where we "added" or spread throughout the swarm?
<MikeFair>
for go-ipfs
<lgierth>
only local
<lgierth>
oh did i miss something? sorry if my comment was out of context
<MikeFair>
no totally in context
<lgierth>
blocks are never "pushed" into the network, only records are (e.g. for peer routing and content routing)
<lgierth>
(or for ipns)
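A quick CLI sketch of the distinction lgierth is drawing (hashes vary per run):

    # the added block stays in the local repo...
    cid=$(echo "hello" | ipfs add -q)
    # ...but a provider record is announced, so other nodes can find us:
    ipfs dht findprovs "$cid"
    # blocks only move when they land on a connected peer's wantlist:
    ipfs bitswap wantlist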
<MikeFair>
I'm working with partial understanding, so: two computers on two local subnets are taking a long time to transfer files to each other
<MikeFair>
okay, so the blocks stay "here" and the provides list gets updated
<MikeFair>
So when a remote node asks; how does it find my peer id? --- I thought it would go through its XOR routing table asking "the closest" node it can find
<TUSF>
I have a question. Two actually. First: Is there any way to see which of my specific objects are being downloaded? `ipfs stats bw` lets me see my overall bandwidth, and my current up/down speed.
<TUSF>
But there's no way to know which file is being downloaded.
gozala has joined #ipfs
<Magik6k>
TUSF: you should be able to see that in 'ipfs log tail', not sure what to grep for though
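For reference, the pipeline Magik6k is suggesting would look something like this (the exact event names worth grepping for are version-dependent):

    ipfs log tail | grep -i bitswap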
<TUSF>
Second: the p2p subcommand looks pretty useful, but it got me thinking; what happens if two IPFS peers have the same peer ID (such as if I copied the .ipfs folder onto multiple machines, and ran the same daemon)
<Magik6k>
"undefined behavior" is probably the most accurate description of what would happen
<TUSF>
Fun.
<Magik6k>
You'd likely end up connected to one peer and the other wouldn't see anything
engdesart has left #ipfs ["no"]
<TUSF>
Thought it might be interesting if multiple daemons (with the same ID) could use "ipfs p2p listener" to treat IPFS as a sort of load balancer.
<MikeFair>
TUSF: FWIW, I envision peer ids evolving to mean "service instance ids"
<TUSF>
Would probably be better if the listener accepted a generated key, instead of one's PeerID actually.
<MikeFair>
The idea being multiple nodes in the network would publish the same id to announce they provide the same service
<MikeFair>
though that could be a CID (or SID) instead of a Peer ID
<TUSF>
Yeah, that's what I'm imagining; for such a function it'd be better if the keys generated from "ipfs key" were used instead.
<Magik6k>
That's more of a thing that could be done at pubsub level
<MikeFair>
TUSF: Which you have via ipns
<MikeFair>
Magik6k: Agreed
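The ipns mechanism MikeFair is pointing at, as a CLI sketch (the key name and CIDs are placeholders):

    # generate a named key and publish a service descriptor under it
    ipfs key gen --type=rsa --size=2048 search-service
    ipfs name publish --key=search-service /ipfs/<CID_OF_SERVICE_DESCRIPTOR>
    # clients follow the stable name; the target can change, the name doesn't
    ipfs name resolve /ipns/<HASH_OF_SEARCH_SERVICE_KEY>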
<MikeFair>
I have a couple use cases, "search" being the most prominent one
<TUSF>
True. That's what I was originally thinking; having a client request service through pubsub, and a server responding to the client. Problem is verifying if a server is actually part of the correct "cluster".
<Magik6k>
afaict it's not possible to listen/dial to `ipfs key` addresses as they aren't really used as addresses in libp2p.
<TUSF>
Err, correct service?
<MikeFair>
I've decided that from a resource consumption standpoint, it's way more efficient to send a small chunk of code that knows how to search through the content of a local repo for something and post out the results, than it is to shove all the content of ipfs through a node and filter/classify it
<Magik6k>
There are some issues about verified/authenticated pubsub
<MikeFair>
I was proposing that "private networks" could be used as a separate/second routing table for a swarm
<MikeFair>
So in a "private network" nodes only connect if the peer knows the secret
<Magik6k>
You can also have a service building/providing a search graph on top of IPLD, and then provide nodes with the means to use this index (effectively removing half of the work of a search provider)
<MikeFair>
Magik6k: Well the problem is the "search algorithm"
<MikeFair>
Magik6k: How you index is determined by what you're searching for
<MikeFair>
err, the kind of searching you're doing --- searching a music catalog for "Mozart's Requiem" or an image database for "stop signs"
Xiti has quit [Quit: Xiti]
<MikeFair>
And when you search for "cats", it's hard to imagine that what you really want is every textual reference to "cat[s]" in the entire ipfs swarm
<MikeFair>
I guess you could publish the various algorithms under different IPLD result trees
<MikeFair>
that'd work --- an open namespace --- rooted by the ipns address of the "search algo type" you want to traverse
<TUSF>
A couple months ago, I was thinking of using pubsub to replace a sort of torrent catalog. There'd be one big database of content with IPFS and bittorrent hashes that archivers can download, and clients who just want to search the database could make a query on pubsub, listen to a second room, and a server would give the results. It'd be a pretty simple system imo, but I never got around to it because lazy.
Xiti has joined #ipfs
astronavt has joined #ipfs
<MikeFair>
Same concept, just a bit more centralized version of it ---
<TUSF>
Well, the idea was that others could have their own server listen in on the same pubsub room, and offer to respond instead.
<MikeFair>
You fire off the request, get a channel id to listen on, pick up the results as they come in
<TUSF>
Yeah, that's pretty much it
<TUSF>
Problem is telling apart legitimate results, and trolls that are just blasting the channel with garbage results.
<lgierth>
try to build data structures, not request/response models
<MikeFair>
So if you tweak that to say every search provider is a piece of code (a CID) announcing that it provides "search" (an ipns reference to a particular kind of search algo)
<lgierth>
every crawler can publish their latest search data structure as an IPNS key, and you as a user can have multiple crawlers on your to-query list
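A rough CLI sketch of that model (the directory name and peer id are placeholders):

    # crawler side: publish the latest index snapshot under a stable IPNS name
    root=$(ipfs add -rQ search-index/)
    ipfs name publish "$root"
    # user side: resolve each crawler on your to-query list and fetch its index
    ipfs get "$(ipfs name resolve /ipns/<CRAWLER_PEER_ID>)"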
<MikeFair>
lgierth: Okay, when I think of a data structure for text, I picture a set of two-letter directories that represent the first and last characters of the word (in english)
<TUSF>
So I guess let clients crawl the database for their desired results, instead of using pubsub?
<MikeFair>
the deeper the directory tree the longer the word
<lgierth>
MikeFair: that would be one -- there's tons of others and we're only starting to figure out how to model things on a DAG
<MikeFair>
but then there's "word ordering" (multiple words in the right sequence to each other)
<MikeFair>
astronavt: What are you thinking for "inverted index"? -- my dual letter based tree is an "inverted index" of sorts --- you can also do something similar with phonetics
tg has joined #ipfs
<astronavt>
MikeFair: oh i was just thinking for words, i think elasticsearch basically uses a big document-term incidence matrix
<astronavt>
so if i wanna search for "kitten" it just reads off the "kitten" row which documents have that token
<MikeFair>
I'm beginning to have vague notions of a 3d or 4d (or Nd) data structure
<MikeFair>
astronavt: Yeah that's exactly what these would be
<astronavt>
i suppose you can have a second index on top of the word list itself
<astronavt>
like a b-tree or something?
<astronavt>
i'm pretty new to the whole data structures thing
<MikeFair>
in my description the "kitten" row would be "kn/ie/tt/"
<MikeFair>
It mimics how our brain recognizes words from both ends in -- as you get to the middle, there are fewer possible candidates
<astronavt>
MikeFair: is there a name for that kind of structure
<MikeFair>
Oh yeah! I forgot the word length
<MikeFair>
"kn/ie/6tt/"
<astronavt>
that kinda naturally begets fuzzy searching too
<astronavt>
at least, for typos within words
<MikeFair>
yes it's sensitive to the "ends"
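A small bash sketch of the path scheme MikeFair is describing (the function name and layout are just illustrative):

    index_path() {
      local w=$1 n=${#1}
      local -a parts=()
      local i=0 j=$((n - 1))
      while (( i < j )); do
        parts+=("${w:i:1}${w:j:1}")   # pair letters from both ends, moving inward
        (( i++, j-- ))
      done
      (( i == j )) && parts+=("${w:i:1}")   # odd length: lone middle letter
      local last=$(( ${#parts[@]} - 1 ))
      parts[last]="${n}${parts[last]}"      # prefix the word length, as in "6tt"
      ( IFS=/; printf '%s/\n' "${parts[*]}" )
    }
    index_path kitten   # -> kn/ie/6tt/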
engdesart has joined #ipfs
<astronavt>
sorry i came into the conversation late, would an ipfs search provider host this index on ipfs itself?
<MikeFair>
to satisfy me it would have to --- which is why IPLD is so important
<MikeFair>
which is what lgierth was saying ; build the index and publish those into IPLD
vivus has quit [Quit: Leaving]
<MikeFair>
I have this uber complicated idea that 'concepts' are assigned tokens/symbols (unique hash ids) ; then a dictionary is produced in many languages to "describe" those concepts
<MikeFair>
It's a many-to-many mapping
<astronavt>
what is ipld again?
<MikeFair>
awesome ;)
<astronavt>
MikeFair: what you're describing is really cool but might involve a good deal of NLP expertise which we are only just now on the cutting edge of
<MikeFair>
InterPlanetary Linked Data
<MikeFair>
astronavt: For NLP like programming, I really like what the programming language "Inform7" has done
<MikeFair>
it's not NLP
<MikeFair>
but it sure feels close
<astronavt>
hmm
<astronavt>
interesting
<MikeFair>
I call IPLD a decentralized/distributed JSON database with node linking
<MikeFair>
(JSON with symlinks)
<MikeFair>
So that's a DB available to everyone
<MikeFair>
oh! --- I just realized -- IPLD now works well enough to make my fake hard drive
astronavt has quit [Remote host closed the connection]
ONI_Ghost has joined #ipfs
jakewalker has quit [Ping timeout: 264 seconds]
vsimon has quit [Remote host closed the connection]
Xe has left #ipfs [#ipfs]
Rusty78 has joined #ipfs
<akkad>
xb
<lgierth>
MikeFair: graph database
<lgierth>
it's a subset of a graph database really
<lgierth>
although i guess you could implement something like tables and joins too, who knows
<lgierth>
IPSQL
TUSF has joined #ipfs
<MikeFair>
Well Neo4j introduced a language called "Cypher" which I really like
jakewalker has joined #ipfs
<MikeFair>
If IPLD ever lifts the "Acyclic" requirement for an implemented structure, I'll be a happy camper
<MikeFair>
lgierth: The problem I see with the concept of "joins" is that all the data has to be pulled locally to my processing center first
t|f has joined #ipfs
<lgierth>
we want ipld selectors to run remotely too, and on partial data
<MikeFair>
if somehow I could get the results of a join over a distributed dataset before receiving both data sets, that'd be slick
<MikeFair>
I could see selectors implementing index slices
<MikeFair>
Heck, just getting a map/reduce capacity would be "AWESOME!"
<MikeFair>
lgierth: btw -- I can't really get go-ipfs to compile on my windows machine --- so a bit stuck on ipns add for the moment
<TUSF>
btw, got the answer to my earlier question; using grep "bitswap" should tell you about the blocks being sent and received… And a whole bunch of other data that's probably related…
<t|f>
Hello all! New to IPFS, trying to get my feet wet with a dweb FOAF addressbook side project, but having trouble finding IPLD examples and cidv1 (CBOR) examples. Anyone have suggestions on where to look?
ONI_Ghost has quit [Ping timeout: 260 seconds]
<TUSF>
Is there any documentation about what these events do?
<MikeFair>
Cygwin environment complains about check_go_version file ('\r' characters causing problems) --- and go get complains about
<MikeFair>
I have another linux laptop that I was using, but grub is complaining loudly about the drive's FS asking it to read outside the partition (I think a Windows 10 install was attempted)
<MikeFair>
oh hmm.... my webhost gives me a shell and some build utils ....
<t|f>
MikeFair: There's also Heroku, no idea how IPFS dev would work on that. They're kind of optimized for web dev
dconroy has joined #ipfs
<MikeFair>
Yeah, I just need to figure out how to install Go as a shell user
<MikeFair>
I'm also updating Go on this windows machine (was 1.7.5); to see if that fixes anything
<MikeFair>
TUSF: didn't mean to ignore you
<MikeFair>
TUSF: There are lots of docs, but I find they are spread out and localized to each effort
<MikeFair>
TUSF: I assume you're referring to the stuff ipfs is spitting out by the daemon?
dconroy has quit [Read error: Connection reset by peer]
TUSF has quit [Quit: Leaving]
vsimon has joined #ipfs
<MikeFair>
okay -- have a custom dir installation of Go on the web host --- I'll keep going ;)
plexigras has quit [Ping timeout: 255 seconds]
<t|f>
MikeFair: Nice, good luck! Anyone have any thoughts on where to find IPLD examples in Go? In a perfect world, examples using FOAF.
<MikeFair>
t|f: I don't quite follow FOAF and IPLD
<MikeFair>
That's kind of like asking for JSON examples that do FOAF
<AphelionZ>
any orbitdb folks here? is there a way to sort of set a flag in a replicated db and only read entries past a certain point? we're finding it's taking a long time to replicate every log entry every time
<MikeFair>
What I did first was take a small JSON document and do: ipfs dag put JSON.doc
rodolf0 has joined #ipfs
rodolf0 has quit [Client Quit]
<MikeFair>
I then made a second JSON doc with: {"data": {"/": "CID_FROM_PREVIOUS_PUT"}}
<MikeFair>
and put that
<MikeFair>
I then did: ipfs dag get CID_OF_LINK_DOC
<MikeFair>
and: ipfs dag get CID_OF_LINK_DOC/data
<MikeFair>
worked as expected
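Pulling that walkthrough together as one runnable transcript (the document contents are arbitrary):

    cid1=$(echo '{"name":"alice"}' | ipfs dag put)
    cid2=$(echo '{"data":{"/":"'"$cid1"'"}}' | ipfs dag put)
    ipfs dag get "$cid2"        # the link doc itself
    ipfs dag get "$cid2/data"   # traverses the link and returns the first doc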
<MikeFair>
AphelionZ: I don't think so, but you might be able to create predictably named dbs and merge those into current
<AphelionZ>
hmmm ok
Alpha64 has quit [Read error: Connection reset by peer]
<MikeFair>
AphelionZ: I think the challenge is OrbitDB can't assume half a database is okay
<AphelionZ>
i thought it would do some CRDT fun and merge it somehow
<AphelionZ>
maybe an append only log isn't really what I'm after
<MikeFair>
AphelionZ: and hasn't implemented the SEQUENCE tracking yet
<AphelionZ>
it's a quantified life app that I'm working on, so lots of tallies and counts
<AphelionZ>
and we have a log of like "fed dog, fed dog, fed dog" with timestamps and such
<AphelionZ>
and we just want to sync two devices
<AphelionZ>
p2p style
<MikeFair>
AphelionZ: Well if it's p2p style, and I sync with your device for the first time, what should I get?
<AphelionZ>
a baseline data set, and then a log of actions on that database
<AphelionZ>
that works fine
<AphelionZ>
what I'm worried about is the SECOND time you sync
<AphelionZ>
i don't think you should have to replicate the entire database again, nahmean?
<AphelionZ>
you already have a good chunk of the data
<MikeFair>
Agreed, I know how Couch does it
<MikeFair>
You need a SEQUENCE number and "replicate since that SEQUENCE"
<MikeFair>
each peer has a separate db I assume?
<MikeFair>
like if I synced my data with you, you'd have two dbs for two devices?
<MikeFair>
(not the same as what you asked, but close -- you can "load(100)" until you get overlap perhaps?)
<AphelionZ>
Interesting
<t|f>
MikeFair: Thanks! I got pulled afk, some good stuff in there.
<MikeFair>
AphelionZ: I also think you're going to want to use "rolling logs" anyway
<MikeFair>
AphelionZ: Say a database of the list of all dbs; then create a new db whenever the next log entry doesn't match the date of the last log entry
<MikeFair>
AphelionZ: So the only thing you replicate is the database of all databases
<AphelionZ>
Ahhh interesting
<t|f>
I will experiment with ipfs dag tomorrow.
<MikeFair>
AphelionZ: You can then simply replicate the latest one you have locally (probably got that before you replicated first); and any further in the list
<MikeFair>
Scale to Year/Month/Day/Hour/Minute --- depending on frequency of data
<MikeFair>
For example, 365 dbname entries per year ; should replicate fast --- but I haven't really used OrbitDB yet
<MikeFair>
You can also roll a DB on a fixed number of entries (make a new log db every 100 entries)
<MikeFair>
(Not basing things on a clock is useful because computer clocks can be wrong)
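A sketch of that rolling-log layout built directly on IPLD via the CLI (the field names are made up for illustration; this is not OrbitDB's actual format):

    day=$(date +%Y-%m-%d)   # or roll on a fixed entry count to avoid trusting clocks
    entry=$(echo '{"event":"fed dog","ts":"'"$day"'"}' | ipfs dag put)
    daylog=$(echo '{"day":"'"$day"'","entries":[{"/":"'"$entry"'"}]}' | ipfs dag put)
    head=$(echo '{"logs":{"'"$day"'":{"/":"'"$daylog"'"}}}' | ipfs dag put)
    ipfs name publish "$head"   # peers re-fetch only the small head plus any day-logs they lack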
<t|f>
CRDTs might be worth looking into too. IPFS seems a good fit for use with CRDTs since it's distributed and individual objects are immutable. https://vaughnvernon.co/?p=1012
<MikeFair>
t|f: They totally are; and they are used; this is happening in the bowels of the replication scheme though
<MikeFair>
For some reason OrbitDB is replicating the entire DB instead of just the latest version (picking up where it left off)
<MikeFair>
I think it's merely a currently unimplemented feature
<MikeFair>
(Replication got a fairly recent overhaul iirc)
<t|f>
MikeFair: Cool. I will have to check out OrbitDB. Night all
vsimon has quit [Remote host closed the connection]
step21 has quit [Ping timeout: 248 seconds]
step21_ has joined #ipfs
step21_ is now known as step21
t|f has quit [Ping timeout: 260 seconds]
anewuser has quit [Ping timeout: 240 seconds]
samm has quit [Read error: Connection reset by peer]
sim590 has quit [Ping timeout: 264 seconds]
sim590 has joined #ipfs
makarukudo has joined #ipfs
wookiehangover has quit [Ping timeout: 268 seconds]
<makarukudo>
testing
wookiehangover has joined #ipfs
}ls{ has quit [Ping timeout: 255 seconds]
wookiehangover has quit [Ping timeout: 255 seconds]
}ls{ has joined #ipfs
wookiehangover has joined #ipfs
wookiehangover has quit [Quit: i'm out]
robattila256 has quit [Quit: WeeChat 2.0.1]
makarukudo has quit [Quit: Page closed]
vsimon has joined #ipfs
muravey has joined #ipfs
TUSF has joined #ipfs
MikeFair has quit [Ping timeout: 256 seconds]
Fess_ has joined #ipfs
Fess has quit [Ping timeout: 276 seconds]
ygrek_ has joined #ipfs
witten has quit [Ping timeout: 252 seconds]
hiei has quit [Ping timeout: 276 seconds]
hiei has joined #ipfs
muvlon has quit [Ping timeout: 276 seconds]
ygrek_ has quit [Ping timeout: 255 seconds]
muvlon has joined #ipfs
Fess_ has quit [Quit: Leaving]
TUSF has quit [Ping timeout: 276 seconds]
rendar has joined #ipfs
dconroy has joined #ipfs
MikeFair has joined #ipfs
TUSF has joined #ipfs
igorline has joined #ipfs
Mateon1 has quit [Remote host closed the connection]
Mateon1 has joined #ipfs
dconroy has quit [Max SendQ exceeded]
muravey has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
biodrone has quit [Ping timeout: 252 seconds]
G3nka1 has joined #ipfs
biodrone has joined #ipfs
chriscool1 has joined #ipfs
zautomata1 has quit [Read error: Connection reset by peer]
zautomata1 has joined #ipfs
toxync01- has joined #ipfs
toxync01- is now known as toxync01_
toxync01 has quit [Ping timeout: 240 seconds]
toxync01_ is now known as toxync01
zautomata1 has quit [Read error: Connection reset by peer]
zautomata1 has joined #ipfs
G3nka1 has quit [Ping timeout: 248 seconds]
AgenttiX has joined #ipfs
}ls{ has quit [Quit: real life interrupt]
ylp has joined #ipfs
whittymatt[m] has left #ipfs ["User left"]
zautomata1 has quit [Quit: WeeChat 1.7]
ONI_Ghost has quit [Read error: Connection reset by peer]
ONI_Ghost has joined #ipfs
ralphtheninja has joined #ipfs
ONI_Ghost has quit [Ping timeout: 240 seconds]
reit has quit [Quit: Leaving]
MikeFair has quit [Ping timeout: 256 seconds]
ilyaigpetrov has joined #ipfs
robattila256 has joined #ipfs
letmutx has joined #ipfs
Guest17071 has quit [Ping timeout: 268 seconds]
mtodor has joined #ipfs
robattila256 has quit [Quit: WeeChat 2.0.1]
mtodor has quit [Ping timeout: 256 seconds]
Ronsor has joined #ipfs
Ronsor is now known as Guest2418
robattila256 has joined #ipfs
Neomex has joined #ipfs
anewuser has joined #ipfs
igorline has quit [Ping timeout: 256 seconds]
plexigras has joined #ipfs
igorline has joined #ipfs
dimitarvp has joined #ipfs
Caterpillar2 has joined #ipfs
demize has left #ipfs ["WeeChat 2.0-rc1"]
zautomata has joined #ipfs
zautomata has quit [Client Quit]
Caterpillar has quit [Ping timeout: 256 seconds]
bomb-on has quit [Quit: zzz]
anewuser has quit [Quit: anewuser]
muvlon has quit [Ping timeout: 256 seconds]
bomb-on has joined #ipfs
muvlon has joined #ipfs
newhouse has joined #ipfs
newhouse has quit [Read error: Connection reset by peer]
newhouse has joined #ipfs
trqx has quit [Remote host closed the connection]
igorline has quit [Ping timeout: 256 seconds]
noffle has joined #ipfs
raynold has quit [Quit: Connection closed for inactivity]
<Taoki>
So... million dollar question I'm trying to find the answer to and base one of my ideas around:
<Taoki>
Pubsub: Is it persistent, or is it momentary? Like if you pub something to a sub, and an IPFS daemon comes online later and reads that sub, will it still show up? Or is it more like TCP packets, where if you aren't online the moment they're sent you lose them forever?
<Taoki>
Will persistent pubsub ever exist, so it can be used to store chat histories and such across the network forever?
igorline has joined #ipfs
<JCaesar>
Well, you could publish objects that have a reference to the current message content and the previous object…
trqx has joined #ipfs
<Taoki>
Objects? IIRC you can only publish string messages
<JCaesar>
Well, but you're free to pick any kind of content for that message? why not make it an ipfs path, then…
<JCaesar>
(or the thing referenced by that path.)
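A sketch of JCaesar's suggestion (topic name is arbitrary; pubsub is experimental and the daemon needs --enable-pubsub-experiment):

    # each message is an IPLD node pointing at the previous one, so late joiners can walk back
    cid=$(echo '{"msg":"hello","prev":{"/":"<CID_OF_PREVIOUS_MESSAGE>"}}' | ipfs dag put)
    ipfs pubsub pub chat-topic "$cid"
    # subscribers only need to be online for the head announcement:
    ipfs pubsub sub chat-topic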
BillBarnhill has joined #ipfs
<BillBarnhill>
Good morning all
NullbutC00L has joined #ipfs
reit has joined #ipfs
<Taoki>
Hmmm... fair enough
Neomex has quit [Ping timeout: 276 seconds]
Alpha64 has joined #ipfs
ONI_Ghost has joined #ipfs
BillBarnhill has quit [Ping timeout: 260 seconds]
Neomex has joined #ipfs
Encrypt has joined #ipfs
NullbutC00L has quit [Ping timeout: 260 seconds]
trqx has quit [Remote host closed the connection]
trqx has joined #ipfs
hipboi_ has quit [Ping timeout: 265 seconds]
Neomex has quit [Read error: Connection reset by peer]
Rusty78 has quit [Read error: Connection reset by peer]
Rustyshack has joined #ipfs
hipboi_ has joined #ipfs
letmutx has quit [Quit: Connection closed for inactivity]
rendar has quit [Ping timeout: 276 seconds]
Rustyshack has quit [Read error: Connection reset by peer]
Rustyshack has joined #ipfs
Alpha64_ has joined #ipfs
Alpha64 has quit [Ping timeout: 256 seconds]
rendar has joined #ipfs
BillBarnhill has joined #ipfs
<BillBarnhill>
The two main languages for IPFS development seem to be Go and Javascript. Are the two implementations about equal in features, or is one more worked on?
<BillBarnhill>
I am just learning IPFS now, but eventually I'd love to work on a Clojure implementation. Down the road though
<AphelionZ>
that'd be cool
<AphelionZ>
the more the merrier, so long as they work :D
<BillBarnhill>
Agreed
<BillBarnhill>
Another question just occurred to me.. max size of an IPFS chunk is 256k, but what is the min size? Put another way, how bad is the storage overhead if I am storing tweets with one address per tweet?
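One way to measure that overhead empirically rather than guess (the tweet text is arbitrary):

    cid=$(echo "just fed the dog" | ipfs add -q)
    ipfs object stat "$cid"   # BlockSize/CumulativeSize show the stored size incl. framing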
muvlon has quit [Quit: Leaving]
kaotisk has quit [Ping timeout: 240 seconds]
espadrine_ has quit [Ping timeout: 256 seconds]
graffen has quit [Quit: Changing server]
<BillBarnhill>
Going through the commit logs I can answer my own question a little. The Go impl seems to see much more traffic than the Javascript impl. I'd still love to see a matrix of IPFS features with a different column for the different impls, if anyone knows of one
mtodor has quit [Remote host closed the connection]
<BillBarnhill>
Hmm. I am seeing a bunch of Hello World level examples for IPFS in Javascript, but having trouble finding that for Go. Ideally I'd like a barebones project that (a) connects to the local IPFS daemon, (b) stores the contents of a Map and saves the hash (ideally using CIDv1/CBOR), (c) resolves the hash, (d) extracts the CBOR data back into a Map. Anyone know where I could find that?
<BillBarnhill>
go-ipfs commands are well written (from what I can tell as a Go novice), but have a fair bit of Command abstractions involved. I think I could trace it eventually, but it'd be much easier with a starter Hello World project
<BillBarnhill>
Have to go afk, will be back in 30
<Magik6k>
BillBarnhill: A Go-level api for go-ipfs is in progress, should be out in the next version(s)
whyrusleeping has quit [Changing host]
whyrusleeping has joined #ipfs
<BillBarnhill>
Oh, ok. I thought there was already a Go API for it. So the APIs then are CLI, javascript, and the python one that's being worked on? Is it possible to find the Go-API work on github currently, so I can start playing with what's there so far (I know it will likely change before it's released)?
<Magik6k>
There is also the http api (which the CLI uses). The API is currently being worked on in https://github.com/ipfs/go-ipfs/tree/master/core/coreapi, but will soon be extracted (and there is no nice constructor for it now). There is also https://github.com/ipfs/go-ipfs-api which works on top of the http API, but it will be rewritten to use the coreapi interface so it may break.
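Until that Go API settles, the put/resolve/get flow BillBarnhill describes can be exercised over the HTTP API via the CLI (values are placeholders):

    # (b) store a map as dag-cbor (json input and cbor storage are the defaults)
    cid=$(echo '{"user":"bill","msg":"a tweet"}' | ipfs dag put --input-enc json --format cbor)
    # (c)/(d) resolve the hash and extract fields again
    ipfs dag get "$cid"
    ipfs dag get "$cid/msg"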
espadrine_ has joined #ipfs
Ecran has joined #ipfs
chriscool1 has joined #ipfs
reit has quit [Ping timeout: 255 seconds]
BillBarnhill has quit [Ping timeout: 260 seconds]
Rustyshack has quit [Read error: Connection reset by peer]
Rusty78 has joined #ipfs
appa_ has quit [Ping timeout: 256 seconds]
BillBarnhill has joined #ipfs
<BillBarnhill>
@Magik6k: Ok, thank you. Those link are good stuff
frog_ has joined #ipfs
espadrine_ has quit [Ping timeout: 256 seconds]
raynold has joined #ipfs
Fess has joined #ipfs
Alpha64_ has quit [Ping timeout: 255 seconds]
vyzo has quit [Ping timeout: 264 seconds]
vyzo has joined #ipfs
Alpha64 has joined #ipfs
Fess_ has joined #ipfs
Fess has quit [Ping timeout: 255 seconds]
witten has joined #ipfs
Fess__ has joined #ipfs
droman has joined #ipfs
Fess_ has quit [Ping timeout: 255 seconds]
Encrypt has quit [Quit: Quit]
BillBarnhill has quit [Ping timeout: 260 seconds]
Fess__ has quit [Quit: Leaving]
maxzor has joined #ipfs
noresult has quit [Quit: leaving]
noresult has joined #ipfs
Rusty78 has quit [Read error: Connection reset by peer]
Rusty78 has joined #ipfs
rendar has quit []
vsimon has quit [Read error: Connection reset by peer]
shizy has joined #ipfs
talonz has quit [Remote host closed the connection]
talonz has joined #ipfs
talonz has quit [Remote host closed the connection]
Mateon3 has joined #ipfs
[BFG] has joined #ipfs
Mateon1 has quit [Ping timeout: 256 seconds]
Mateon3 is now known as Mateon1
frog_ has left #ipfs ["Leaving"]
Spakman has joined #ipfs
Spakman has quit [Client Quit]
Spakman has joined #ipfs
Spakman has quit [Client Quit]
Spakman has joined #ipfs
vsimon has joined #ipfs
Rustyshack has joined #ipfs
Rusty78 has quit [Read error: Connection reset by peer]
dgrisham has joined #ipfs
ONI_Ghost has quit [Ping timeout: 256 seconds]
espadrine_ has joined #ipfs
raynold has quit [Quit: Connection closed for inactivity]
TUSF has quit [Ping timeout: 256 seconds]
masonforest has joined #ipfs
ylp has quit [Quit: Leaving.]
<masonforest>
Hey everyone 👋 thanks so much for building libp2p! Decentralize the world 😀
<masonforest>
It looks like it’s failing when it tries to dereference `conn.smuxConn.IsClosed()` ? Happy to file and issue or dig into it but I thought I’d ask in here to see if anyone had any clues to get me started.
TUSF has joined #ipfs
talonz has joined #ipfs
Fess has joined #ipfs
Adbray has quit [Read error: Connection reset by peer]
ilyaigpetrov has quit [Quit: Connection closed for inactivity]
raynold has joined #ipfs
shizy has quit [Ping timeout: 276 seconds]
talonz has quit [Remote host closed the connection]
talonz has joined #ipfs
plexigras has quit [Ping timeout: 256 seconds]
plexigras has joined #ipfs
vsimon has quit [Remote host closed the connection]
shizy has joined #ipfs
<Monokles>
Does anyone know if there exists some sort of "balanced" dag structure (i.e. the path from root to node is as short as possible for every node added)?
<DuClare>
I'm not sure I understand..
<DuClare>
Balance is not so much a quality of the structure
<DuClare>
As it is of the way you add data to it (or what data you add)
igorline has quit [Ping timeout: 240 seconds]
<DuClare>
(And whether you can modify the graph to rebalance it)
<Monokles>
Ehh, I'm having random late-night thoughts so it might not be worth pursuing haha, but I was looking for some sort of balanced tree structure where each node is allowed to have more than one parent
<DuClare>
It's the operations you do that give a tree balance
<DuClare>
A tree with given structure can be either balanced or not
<Monokles>
I'm not sure I understand you now...
<DuClare>
Balancing is kinda like sorting, with hierarchy
<DuClare>
So consider a linked list
<DuClare>
It can be sorted or not
<DuClare>
It's still a linked list
<Monokles>
yeah, but it can be done through e.g. the insert operation in a tree (like with red-black trees), and usually those operations are part of the data structure definition, right?
<DuClare>
Well I guess you could see it that way. To me a binary tree is still a binary tree whether you keep it sorted or not.