infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
spilotro has quit [Ping timeout: 268 seconds]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
rovdyl has joined #ipfs
chatter29 has joined #ipfs
<chatter29>
hey guys
<chatter29>
allah is doing
<chatter29>
sun is not doing allah is doing
<chatter29>
to accept Islam say that i bear witness that there is no deity worthy of worship except Allah and Muhammad peace be upon him is his slave and messenger
chatter29 has quit [Client Quit]
infinity0 has joined #ipfs
tilgovi has joined #ipfs
<dsal>
That guy's been spamming every channel I've been in for a long time, but never sticks around to explain wtf that's supposed to mean.
<SchrodingersScat>
it's a muslim meme
asyncsec has joined #ipfs
<SchrodingersScat>
sometimes they list what is not doing and what their god is doing
chris613 has quit [Quit: Leaving.]
spilotro has joined #ipfs
<SchrodingersScat>
freenode of all places should especially not care
<dsal>
freenode is not doing
john2 has quit [Ping timeout: 268 seconds]
Boris_lven has joined #ipfs
arpu has quit [Ping timeout: 255 seconds]
<SchrodingersScat>
dsal: lol
asyncsec has quit [Quit: asyncsec]
jkilpatr has quit [Ping timeout: 260 seconds]
tilgovi has quit [Ping timeout: 240 seconds]
twiscar has joined #ipfs
asyncsec has joined #ipfs
intern has quit [Ping timeout: 240 seconds]
arpu has joined #ipfs
rcat has quit [Remote host closed the connection]
asyncsec has quit [Quit: asyncsec]
dimitarvp has quit [Quit: Bye]
Boris_lven has left #ipfs ["Leaving"]
spilotro has quit [Ping timeout: 240 seconds]
spilotro has joined #ipfs
jaboja has joined #ipfs
stoopkid has joined #ipfs
chris613 has joined #ipfs
<stevenaleach>
Thanks, with either repo gc or --enable-gc, all non-pinned content would be lost, correct?
jsgrant_om has joined #ipfs
<rovdyl>
yes pinned content is safe
<stevenaleach>
I'm planning a script that checks a file, ipfs_pin_list, which each user will need to maintain in their home directory, and compares current pins against the list from the previous run, un-pinning anything that is absent from a user's list but present on both the prior and current runs, then running repo gc if and only if the disk is nearly full - does this sound reasonable, or is there a better way?
tilgovi has joined #ipfs
<stevenaleach>
That is, I need the drive to be fully devoted to being an ipfs cache, but it is a limited resource on a semi-public system (hackerspace) where any user can, of course, pin content - and will likely pin things they don't need later and have forgotten about. If each user has a file listing hashes to keep pinned on cleanup, they only need to keep that file current, while the script would unpin pinned content not listed in a user's file
<stevenaleach>
and which was present on the last run as well (so that things newly pinned won't be deleted that a user hasn't listed as a needed resource)
<deltab>
why allow pinning then?
neuthral has quit [Ping timeout: 260 seconds]
<deltab>
I don't see the advantage of having a second list of pins
<deltab>
what's to stop people leaving unneeded things in there?
<stevenaleach>
Because garbage collection is going to have to be run fairly regularly - my hope is that data that groups are working on (pinned or not) will usually be on disk and fast to access so long as it is still being used - but each time GC is run, if my understanding is correct, all non-pinned content will be cleared.
<deltab>
how about GC that only removes the oldest, least-used, most-available content?
neuthral has joined #ipfs
dignifiedquire has quit [Quit: Connection closed for inactivity]
<stevenaleach>
That's what I was hoping the daemon did, but it seems (again, please correct me if I'm wrong) that it doesn't yet. If the daemon cleared out older cached content that was never pinned - just files read by someone and thus in the cache - keeping the newer cache when it needs room for newly added content or for new reads, then I should almost never need to run GC manually and wipe out everything that isn't pinned. The
<stevenaleach>
purpose of the pin list is to allow users to have an editable file they can update when replacing sets with updated versions and such, and allows for the pinning of things that must not become unavailable so that jobs can rely on them (garbage collection won't clear them until the script has run twice and the interval will be shorter than any job's runtime). This is for a shared machine-learning rig with a batch
<stevenaleach>
scheduling system, so what's supposed to be there must be there ;-)
neuthral has quit [Ping timeout: 240 seconds]
neuthral has joined #ipfs
<stevenaleach>
If the daemon does clear older unpinned content (least recently accessed) then the script only needs to unpin stuff not listed in the user's file that's been there a while...
<deltab>
that's not implemented yet, afaik, but it would be nice
<stevenaleach>
Cool, that's what I needed to know. I'll just go with gc at a threshold combined with the unpinning of older pins not listed in the user file. I'm sure the daemon will do access-time based GC on the cache eventually and then it will just function as a method for older pins to expire unless listed as permanent.
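The two-pass expiry rule stevenaleach settles on above can be sketched in Python. The set logic and the disk-full threshold come from the discussion; every name here (and the 90% threshold) is illustrative, not part of any IPFS tooling:

```python
import shutil


def pins_to_unpin(current_pins, previous_pins, keep_lists):
    """Pins eligible for removal: present on both the previous and
    current runs, and not listed in any user's ipfs_pin_list."""
    kept = set().union(*keep_lists) if keep_lists else set()
    # Requiring two consecutive sightings means freshly pinned content
    # survives at least one full interval before a user must list it.
    return (set(current_pins) & set(previous_pins)) - kept


def should_gc(path="/", threshold=0.9):
    """Only trigger `ipfs repo gc` when the disk is nearly full."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total >= threshold
```

The actual unpinning and `ipfs repo gc` invocations would wrap these in calls to the ipfs CLI.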
<jbenet>
hey charlienyc[m] -- still around?
<jbenet>
charlienyc[m] ping me or flyingzumwalt when you're back. or grab us by email
mildred2 has quit [Read error: Connection reset by peer]
mildred3 has joined #ipfs
Foxcool has joined #ipfs
sirdancealot has joined #ipfs
palkeo has quit [Quit: Konversation terminated!]
tilgovi has quit [Quit: No Ping reply in 180 seconds.]
tilgovi has joined #ipfs
Foxcool has quit [Ping timeout: 240 seconds]
rendar has joined #ipfs
btmsn has joined #ipfs
TUSF has joined #ipfs
<TUSF>
I'm making an app using IPFS and it's making use of IPNS to host a database that updates periodically. What would be the best way to host this database without requiring the end user to download the entire database every time there's an update?
john2 has quit [Ping timeout: 240 seconds]
<TUSF>
I'm thinking, having the database be a collection of JSON files separated by the date/week/month of each entry, with new updates only being added to the latest file. At least this way users don't need to download the entire database at once
<TUSF>
But I'm not sure if this is really the most efficient way
<Stskeeps>
TUSF: look into CRDTs perhaps
tilgovi has quit [Quit: No Ping reply in 180 seconds.]
tilgovi has joined #ipfs
tilgovi has quit [Ping timeout: 246 seconds]
stevenaleach has quit [Quit: Leaving]
btmsn has quit [Quit: btmsn]
mildred has joined #ipfs
<TUSF>
Stskeeps: CRDT is interesting, but I get the impression this is for different databases updating concurrently. It's not what I want to use (not for this project anyways); in my case there's only one database that's updated locally and pushed to IPFS. I just don't want the entire DB to be downloaded every time, except for the first time... Honestly though, I would rather not have users download the entire thing before getting started
<TUSF>
anyways. Optimally, a user should be able to search the database without needing to download it...
btmsn has joined #ipfs
<Mateon1>
TUSF: Maybe use a blockchain? Have blocks of updates pointing to previous blocks by their IPFS hashes
talonz has joined #ipfs
<Mateon1>
For searches, it might be possible to do something clever, and only use some of the later blocks in the blockchain, but that really depends on what sort of search you need
<Mateon1>
Also, look at the `ipfs dag` API
chungy has joined #ipfs
<r0kk3rz>
TUSF: the database will be chunked, unchanged chunks will not need to be redownloaded
<r0kk3rz>
you could also structure the data to split it into an index + data blocks
<TUSF>
The current database is an SQL one, though I only really want users to navigate through a single table which has a "name" column, as well as some other data. For the most part, users would be searching by the "name" of the entry.
<TUSF>
Also, I'm not quite sure I'm understanding `ipfs dag` correctly. The API documentation is pretty minimal.
<Mateon1>
Well, an SQL database will not deduplicate well across updates
<Mateon1>
TUSF: You can look at https://ipld.io/, the `ipfs dag` API is an implementation of IPLD
<Mateon1>
In short, it's an API that lets you represent JSON-like data in IPFS, with links to other IPFS objects, in a compact binary representation
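Mateon1's suggestion - blocks of updates linking to previous blocks by hash - can be modeled outside IPFS. This Python sketch uses sha256 hex digests as stand-ins for IPFS/IPLD links; it illustrates the linking idea only, not the actual `ipfs dag` wire format:

```python
import hashlib
import json


def make_block(entries, prev_hash=None):
    """Bundle a batch of new database entries with a link to the
    previous block, the way IPLD objects link to each other by hash."""
    block = {"entries": entries, "prev": prev_hash}
    data = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(data).hexdigest(), block


def walk(blocks, head):
    """Follow prev-links from the newest block back to genesis,
    yielding entries newest-first. A client that already holds an
    old head could stop as soon as it reaches a hash it knows."""
    h = head
    while h is not None:
        block = blocks[h]
        yield from block["entries"]
        h = block["prev"]
```

An update then only publishes the new head block; old blocks are unchanged and deduplicate by content address.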
amosbird has quit [Ping timeout: 240 seconds]
Caterpillar has joined #ipfs
<rendar>
Mateon1: what do you mean by "deduplicate well"?
amosbird has joined #ipfs
<Mateon1>
rendar: If a person has an old version of the database, the newer version will not share many blocks with the old one, forcing the person to download more data than necessary when they update.
<r0kk3rz>
hmm, i wonder if you could write a sparql interface to an IPLD tree
maxlath has joined #ipfs
<TUSF>
So, let's say I convert to a JSON array, with every row from the SQL table being a JSON object. Would IPFS handle updating that sort of database, seeing as the changes would sorta just be like appending a block to the end?
<r0kk3rz>
depends on what you mean by 'handle', but yeah that should work well
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
rendar has joined #ipfs
rovdyl has quit [Ping timeout: 260 seconds]
illogicalness has joined #ipfs
rovdyl has joined #ipfs
espadrine` has joined #ipfs
arkimedes has joined #ipfs
jungly has joined #ipfs
konubinix has quit [Read error: No route to host]
dignifiedquire has joined #ipfs
konubinix has joined #ipfs
rory has joined #ipfs
ecloud has quit [Ping timeout: 268 seconds]
Foxcool has joined #ipfs
john2 has joined #ipfs
maxlath1 has joined #ipfs
ecloud has joined #ipfs
maxlath has quit [Ping timeout: 240 seconds]
maxlath1 is now known as maxlath
rory has left #ipfs [#ipfs]
arkimedes has quit [Ping timeout: 240 seconds]
<TUSF>
So after some thinking, my idea so far is to have each entry in the database be a JSON object (or maybe a CSV row?). I can store each json object as individual DAG objects, and have larger chunks that links each object. Large files in IPFS are already broken up into chunks when they're large enough, so I'd just have to replicate those chunks but with a smaller format... right? But I guess it's not that easy...
mahloun has joined #ipfs
bwerthmann has joined #ipfs
<Mateon1>
Yeah, I would prefer to store multiple entries in a single dag object in a list, because the overhead of hashes for small objects is significant. But I guess that depends on the size of a database row, if it's a blog post + metadata, it's probably worth it to pay 8 bytes of overhead per entry, but if it's something small, 8 bytes * num entries might be too large
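Mateon1's trade-off - per-object link overhead versus object size - can be made concrete. This sketch is illustrative; the 8-byte link cost is taken from the message above rather than from any IPFS spec:

```python
def batch_entries(entries, batch_size=128):
    """Group rows into fixed-size batches so each dag object holds
    many entries, amortizing per-object link overhead. batch_size
    is an illustrative knob, not an IPFS default."""
    return [entries[i:i + batch_size]
            for i in range(0, len(entries), batch_size)]


def link_overhead(num_entries, batch_size, bytes_per_link=8):
    """Rough total link cost: one link per batch instead of one
    per entry (bytes_per_link as estimated in the chat above)."""
    batches = -(-num_entries // batch_size)  # ceiling division
    return batches * bytes_per_link
```

For 1000 small rows, one-entry-per-object costs 8000 bytes of links, while batches of 128 cost 64 bytes.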
Foxcool has quit [Ping timeout: 260 seconds]
<TUSF>
So question: when I use `ipfs dag get` on a larger file, it returns a JSON object with a bunch of links. What's the "data" property that seems to hold some repeating base64 number?
Foxcool has joined #ipfs
illogicalness has quit [Ping timeout: 240 seconds]
Betsy has joined #ipfs
sirdancealot has quit [Ping timeout: 260 seconds]
cxl000 has joined #ipfs
gmcabrita has joined #ipfs
stoopkid has quit [Quit: Connection closed for inactivity]
Foxcool has quit [Read error: Connection reset by peer]
mib_kd743naq has joined #ipfs
Foxcool has joined #ipfs
<mib_kd743naq>
folks, there is a new contributor who (unlike myself) is able to tackle the go side of unixfs being not truly usable
<mib_kd743naq>
please provide him the necessary design acknowledgement/support before he runs away, this feature is too important to not have
ShalokShalom_ has quit [Ping timeout: 240 seconds]
rcat has joined #ipfs
Foxcool has quit [Read error: Connection reset by peer]
Foxcool has joined #ipfs
toXel has quit [Remote host closed the connection]
toXel has joined #ipfs
traverseda has quit [Ping timeout: 268 seconds]
dimitarvp has joined #ipfs
bwerthmann has quit [Ping timeout: 255 seconds]
maxlath has quit [Ping timeout: 240 seconds]
traverseda has joined #ipfs
maxlath has joined #ipfs
fargo has joined #ipfs
shizy has joined #ipfs
<TUSF>
Is there a way to block access to certain API endpoints over HTTP?
john4 has joined #ipfs
Mateon3 has joined #ipfs
<TUSF>
Setting up CORS the way described here: https://github.com/ipfs/js-ipfs-api could allow malicious IPFS apps to pin files to, or remove files from, your ipfs node, I think?
<TUSF>
Is there a way to only enable certain endpoints of the API?
btmsn1 has joined #ipfs
keks_ has joined #ipfs
neuthral_ has joined #ipfs
btmsn1 has quit [Ping timeout: 260 seconds]
Soulweaver has joined #ipfs
asyncsec has joined #ipfs
<achin>
if you were thinking about exposing your node publically, you could put a proxy in front of it and use it to disallow certain URLs
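achin's proxy suggestion might look like the following nginx fragment; the allowlisted endpoints, port 5001 (the go-ipfs API default), and the regex are assumptions sketched for illustration, not a vetted configuration:

```nginx
# Illustrative sketch: expose only a few read-only endpoints of a
# local IPFS API and refuse everything else.
location ~ ^/api/v0/(cat|ls|dag/get|object/get)$ {
    proxy_pass http://127.0.0.1:5001;
}
# pin/add/rm, config, and all other API calls are denied.
location /api/ {
    return 403;
}
```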
toXel has quit [*.net *.split]
rcat has quit [*.net *.split]
jkilpatr has quit [*.net *.split]
gmcabrita has quit [*.net *.split]
Betsy has quit [*.net *.split]
john2 has quit [*.net *.split]
rovdyl has quit [*.net *.split]
chungy has quit [*.net *.split]
btmsn has quit [*.net *.split]
mildred has quit [*.net *.split]
neuthral has quit [*.net *.split]
Mateon1 has quit [*.net *.split]
xelra has quit [*.net *.split]
shibacomputer has quit [*.net *.split]
omnigoat has quit [*.net *.split]
daviddias has quit [*.net *.split]
richardlitt has quit [*.net *.split]
keks has quit [*.net *.split]
Guest188158[m] has quit [*.net *.split]
Mateon3 is now known as Mateon1
<TUSF>
Makes sense. But what if I wanted people to access a site from their local gateway, while not exposing themselves to other potential dangers?
<TUSF>
Telling them "Just run these three lines before running your daemon" doesn't really feel right.
rcat has joined #ipfs
<achin>
i think most (maybe all?) of the read-only APIs are available via the gateway
<Magik6k>
You may try running an integrated js-ipfs node if it fits your app.
mildred has joined #ipfs
toXel has joined #ipfs
rovdyl has joined #ipfs
jkilpatr has joined #ipfs
asyncsec has quit [Remote host closed the connection]
Soft has joined #ipfs
asyncsec has joined #ipfs
grumble is now known as 14WAA001L
richardlitt has joined #ipfs
daviddias has joined #ipfs
omnigoat has joined #ipfs
Guest188158[m] has joined #ipfs
shibacomputer has joined #ipfs
chungy has joined #ipfs
gmcabrita has joined #ipfs
omnigoat has quit [Max SendQ exceeded]
omnigoat has joined #ipfs
14WAA001L is now known as grumble
xelra has joined #ipfs
Boomerang has joined #ipfs
overproof has joined #ipfs
overproof has quit [K-Lined]
crankylinuxuser has quit [Disconnected by services]
conway has joined #ipfs
gmcabrita has quit [*.net *.split]
chungy has quit [*.net *.split]
shibacomputer has quit [*.net *.split]
daviddias has quit [*.net *.split]
richardlitt has quit [*.net *.split]
Guest188158[m] has quit [*.net *.split]
chungy has joined #ipfs
gmcabrita has joined #ipfs
Guest188158[m] has joined #ipfs
richardlitt has joined #ipfs
daviddias has joined #ipfs
shibacomputer has joined #ipfs
TUSF has quit [Ping timeout: 255 seconds]
Boomerang has quit [Ping timeout: 240 seconds]
fargo has left #ipfs ["WeeChat 1.7.1"]
Boomerang has joined #ipfs
MDead has joined #ipfs
MDude has quit [Ping timeout: 240 seconds]
MDead is now known as MDude
TUSF has joined #ipfs
bwerthmann has joined #ipfs
Guest225929[m] has joined #ipfs
bwerthmann has quit [Client Quit]
Foxcool has quit [Read error: Connection reset by peer]
Foxcool has joined #ipfs
ashark has joined #ipfs
btmsn has joined #ipfs
xelra has quit [Ping timeout: 240 seconds]
asyncsec has quit [Quit: asyncsec]
asyncsec has joined #ipfs
Boomerang has quit [Remote host closed the connection]
xelra has joined #ipfs
ylp has joined #ipfs
Foxcool has quit [Read error: Connection reset by peer]
Foxcool has joined #ipfs
Guest151890[m] has joined #ipfs
jfmherokiller[m] has joined #ipfs
<jfmherokiller[m]>
would it be possible to use ipfs as a kind of distributed cache?
Soulweaver has quit [Remote host closed the connection]
xelra has joined #ipfs
<r0kk3rz>
jfmherokiller[m]: yes. with an appropriate coordination layer
maxlath has joined #ipfs
<jfmherokiller[m]>
What I was thinking, hypothetically, was using something like an HTTP proxy and having the proxy regularly add files to the ipfs repo
btmsn has quit [Ping timeout: 240 seconds]
jhand has joined #ipfs
Foxcool has quit [Read error: Connection reset by peer]
<dsal>
I'm a bit confused as to how ipns names work. How does one make /ipns/something ?
asyncsec has quit [Quit: asyncsec]
<reit>
ipfs name publish --help
<reit>
should help
<dsal>
I don't see where you actually name a thing, though.
<dsal>
It just gives you a hash -> hash mapping.
<lgierth>
the human-readable names are through dns
<TUSF>
IPNS doesn't let you name something arbitrarily. Instead it binds your unique hash to some content, and that binding can be changed.
<dsal>
Oh. I thought I saw non-dns looking ones.
<lgierth>
there's IPNS itself: /ipns/QmSomeHash, and IPNS with a dnslink: /ipns/example.net which looks up the path from a TXT dns record
<dsal>
That makes a lot more sense, though.
<dsal>
Yeah. OK. Now I no longer wonder how those issues are resolved.
<therealklanni[m]>
So ipns hashes a public key right? i.e. an "identity", not data?
<lgierth>
there's gonna be more human-readable naming services in the future but right now it's just dns
<lgierth>
therealklanni[m]: yeah
<lgierth>
and that identity signs records which say which content hash to point to
<lgierth>
dsal: DAT for example has a similar mechanism for human-readable names: dns-over-https
<therealklanni[m]>
"dns" records
<dsal>
lgierth: which DAT?
<reit>
i assume namecoin or similar could eventually be used for readable names dsal
<dsal>
yeah, this makes sense. I thought there was something directly in the ipfs/ipns data that let me name something and was wondering how that was resolved.
<dsal>
I still don't fully understand the benefit of ipns. It seems a little convenient, but not essential.
<reit>
you need something like that if you want a mutable namespace
realisation has joined #ipfs
<reit>
otherwise you'd be limited to static sites
<dsal>
Sure, but if the human readable mappings are external anyway, that stuff can happen there.
<therealklanni[m]>
The benefit is you can give someone the ipns, and then you can update what it points to (i.e. a website you can update)
<therealklanni[m]>
Rather than send them a new ipfs address every time you want them to get your updated page
asyncsec has joined #ipfs
TUSF has quit [Ping timeout: 246 seconds]
<dsal>
Sure, I get that it's useful if you have the ipns hash. But if you don't, you have to look it up anyway, and the thing you look it up from could change. e.g., dns can point to ipfs directly instead of ipns.
<lgierth>
human-readable names are the the only entrypoint into a web ;)
<lgierth>
eeh *are not the only
<lgierth>
sometimes people just click a link
<lgierth>
or applications process stuff
maxlath has quit [Ping timeout: 240 seconds]
galois_d_ has joined #ipfs
ylp has quit [Read error: No route to host]
<dsal>
"web" is a good point, yeah
asyncsec has quit [Client Quit]
<lgierth>
the point of ipns is that it's cryptographically secure mutable pointers
<lgierth>
(about dat i'm looking for the link
<dsal>
Yeah. I think it made sense when you said "web"
<therealklanni[m]>
datproject.org
<dsal>
datprojectdoh
elkalamar has quit [Ping timeout: 240 seconds]
galois_dmz has quit [Ping timeout: 260 seconds]
reit has quit [Ping timeout: 264 seconds]
maxlath has joined #ipfs
<dsal>
What kind of key do I need to make a plain ipns entry?
realisation has quit [Ping timeout: 246 seconds]
<therealklanni[m]>
ipfs key --help
<therealklanni[m]>
Any key listed by `ipfs key list`
<lgierth>
yeah you can use one from ipfs key, by specifying it with ipfs name publish -k
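Put together, the flow lgierth and therealklanni[m] describe would look roughly like this; a sketch only - the key name is made up and the bracketed hashes are placeholders:

```shell
# generate a named key (prints its /ipns/ hash)
ipfs key gen --type=rsa --size=2048 mysite
# point that name at some content
ipfs name publish --key=mysite /ipfs/<content-hash>
# anyone can then resolve the name to the current content
ipfs name resolve /ipns/<key-hash>
```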
TUSF has joined #ipfs
<dsal>
Oh, I see it there now. Sorry, somehow didn't notice it was specifying a type.
<therealklanni[m]>
So is IPRS supposed to be a way of creating records, like dns for example? It seems to be a spec for creating many types of "record", but maybe a human-readable name is one of the intentions of this spec. Anyone know?
<therealklanni[m]>
I don't know if it's meant to be part of IPNS, used in conjunction with it, a replacement for it, or completely separate.
reit has joined #ipfs
jungly has quit [Remote host closed the connection]
nannal has quit [Ping timeout: 268 seconds]
ianopolous has quit [Read error: Connection reset by peer]
jkilpatr has quit [Ping timeout: 240 seconds]
nannal has joined #ipfs
jkilpatr has joined #ipfs
<dsal>
ipfs pin add -r /ipns/X is pretty nice.
<whyrusleeping>
dsal: it might not be doing what you think it is
keks_ is now known as keks
<dsal>
I've got a bunch of crap pinned.
<whyrusleeping>
it resolves the mutable name down to the /ipfs/ hash and pins that
<dsal>
It *looks* like what I want.
<dsal>
Yeah, that's what I want.
<whyrusleeping>
ah, okay
<whyrusleeping>
it doesnt update the values if you update the entry though
<dsal>
What do you mean?
<TUSF>
Basically means that you need to run the same command whenever the IPNS hash changes
<whyrusleeping>
so if you set /ipns/FOO to X
<whyrusleeping>
and run ipfs pin add /ipns/FOO, it will pin X
rovdyl has quit [Ping timeout: 260 seconds]
<whyrusleeping>
then if after that, you change /ipns/FOO to point to Y, your node will still have X pinned, and won't update their pin automatically to Y
<dsal>
Yeah, I figured I could just do something like ipfs key list -l | awk '{print $1}' | xargs -n1 -I% ipfs pin add -r /ipns/%
<dsal>
Except I need a place to put all my keys.
<dsal>
(on a remote node)
<dsal>
I need another key just for the thing that has all of the keys I care about. heh
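The maintenance loop dsal is circling around (re-resolve each name, pin the new target, drop the old pin) can be sketched with the actual IPFS calls injected as plain functions, so the update logic itself is testable; all names here are illustrative:

```python
def refresh_pins(names, resolve, pin, unpin, state):
    """For each /ipns/ name, pin whatever it currently resolves to
    and unpin the previously pinned hash if the target has moved.
    `state` maps name -> last pinned /ipfs/ hash; resolve/pin/unpin
    would shell out to the ipfs CLI in a real script."""
    for name in names:
        target = resolve(name)
        old = state.get(name)
        if target != old:
            pin(target)
            if old is not None:
                unpin(old)
            state[name] = target
```

Run from cron, this keeps the pin set tracking each IPNS name even though (as whyrusleeping notes) pins don't follow IPNS updates on their own.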
<whyrusleeping>
and now we step into the other hard problem in computer science, key management
<whyrusleeping>
(which may actually just be a subset of the naming problem)
asyncsec has joined #ipfs
<dsal>
Do keys go away on their own?
<whyrusleeping>
'go away'
<whyrusleeping>
?
<dsal>
Like, does the publishing need to be refreshed?
Akaibu has quit [Quit: Connection closed for inactivity]
asyncsec has quit [Client Quit]
<whyrusleeping>
ah, yeah, they last about 24 hours
<whyrusleeping>
right now, your default key gets automatically republished if your daemon is running
<whyrusleeping>
but the extra ones generated through ipfs key still need to be wired up
mootpt has joined #ipfs
<dsal>
Hmm... So I just need to republish?
<mahloun>
Hi all, first question here: I'm working on a little project during my studies and I'm currently gaining insight into how ipfs works. Practicing with the command line, I'm struggling with the key command. I can't resolve my own name; for instance `ipfs name resolve` returns an error. Is it a bug or am I doing something wrong?
<dsal>
That works well enough, because I was planning to have a cron job to update this thing, anyway.
<dsal>
mahloun: depends on what you're doing
<TUSF>
Most IPFS commands don't work unless you have a daemon running
<mahloun>
TUSF: I've got the daemon running
<mahloun>
dsal: just having a daemon running and experiment the tool
<dsal>
mahloun: you didn't say what you typed. Can't tell you if you did it wrong
<TUSF>
`ipfs name resolve` returns the hash your name is pointed to. Meaning, it'll only work if you have something published on IPNS
keks[m]1 has left #ipfs ["Kicked by @appservice-irc:matrix.org"]
<mahloun>
dsal: sorry `ipfs name resolve` as it is written in the help
<mahloun>
TUSF: ok thanks
<dsal>
Oh, I thought you were trying to resolve some other hash.
<dsal>
Is the error confusing? (Not at a computer)
rovdyl has joined #ipfs
<mahloun>
dsal: apart from the fact that I didn't know a name had to be published in order to be resolved (that's my fault), I'd say no.
<whyrusleeping>
mahloun: just curious so we can improve things, what docs did you look at?
<whyrusleeping>
where can we improve the docs to help make this clearer?
<dsal>
IPFS seems mostly straightforward, but a couple things don't work exactly the way I wouldn't made them.
<whyrusleeping>
dsal: yeah?
<mahloun>
just the command line one. I typed `ipfs name resolve --help` and the first example is about resolving your own name. I tried it and got the error
<lemmi>
dsal: that double negation made me giggle :>
<dsal>
heh. I blame phone
<whyrusleeping>
mahloun: ah, thats a good point. we should mention that it requires a publish first
Encrypt_ has joined #ipfs
<dsal>
Secondary keys feel a little excessively secondary.
<whyrusleeping>
secondary keys? meaning the ones from ipfs key?
<mahloun>
dsal: glad to have helped
<dsal>
whyrusleeping: yeah. I made one named 'twitter', and 'ipfs name resolve twitter' confuses it.
<whyrusleeping>
thats a good point...
<whyrusleeping>
I guess the reason for that is worries over namespace collision
<whyrusleeping>
but you can always explicitly say /ipns/KEY to reference a hash key (to distinguish it from a named key named after a hash)
<whyrusleeping>
alright, we will make that happen
Encrypt_ is now known as Encrypt
<dsal>
I'm victim of ipns caching at the moment. heh
<TUSF>
A bit of a consistency annoyance: Why is `ipfs key list` "list", while every other "list" like command is `ls`?
<dsal>
I put my twitter archive link here: http://ipfs.sallings.org/ and then learned that that doesn't work (for a good reason). Waiting for update.
<dsal>
yeah, 'ipfs key ls' just failed me a minute ago.
matoro has quit [Quit: WeeChat 1.7.1]
<mahloun>
another one why the verbose option of `ipfs key ls` is `-l` and not `-v`
<whyrusleeping>
TUSF: no good reason. It should probably be ls
matoro has joined #ipfs
<dsal>
I'm starting to think ipfs may not be perfect yet.
<whyrusleeping>
mahloun: thats to match 'ls'
<whyrusleeping>
yeah, list should definitely be ls...
<mahloun>
whyrusleeping: ah yeah, of course
sirdancealot has joined #ipfs
<dsal>
I think the -l form should be default. That's what I wanted when I went looking.
<dsal>
Whatever I want when I look for something should be the default.
<whyrusleeping>
yeah, theres an issue asking for that already. I think thats acceptable
<whyrusleeping>
lol
<whyrusleeping>
"IT SHOULD WORK THE WAY I WANT IT TO WORK"
<TUSF>
That's how computers ought to be
espadrine` has quit [Ping timeout: 260 seconds]
Foxcool has quit [Ping timeout: 240 seconds]
mildred has quit [Read error: Connection reset by peer]
mildred has joined #ipfs
persecutrix has joined #ipfs
spermatocele has joined #ipfs
spermatocele has quit [Excess Flood]
m_anish has quit [Ping timeout: 240 seconds]
vivus has joined #ipfs
maxlath has quit [Ping timeout: 240 seconds]
jsgrant_om has joined #ipfs
jkilpatr has quit [Remote host closed the connection]
<charlienyc[m]>
It worked yesterday and I have no other network problems.
<dsal>
How do you link to an ipfs thing? It's kind of annoying that /ipfs/x doesn't work in a hosted service, and //gateway.ipfs.io/ipfs/x seems wrong
<dsal>
Er, I was doing /ipns/, but same thing.
elkalamar has joined #ipfs
<dsal>
Would it be wrong for Host:-based things to understand /ipfs/ and /ipns/ as "absolute" ?
TUSF has joined #ipfs
<lemmi>
charlienyc[m]: looks like the dns record isn't setup correctly
<lemmi>
tr.wikipedia-on-ipfs.org. 3600 IN TXT "ALIAS for gateway.ipfs.io"
<lemmi>
this should include a hash IIRC
<victorbjelkholm>
charlienyc[m]: what's the full path you're trying to access?
<dsal>
should be in the form of "dnslink=/ipns/Qxxxxx"
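Combining lemmi's paste with dsal's correction, the fixed record would be a zone-file line along these lines; the hash is a placeholder, not a real peer ID:

```
tr.wikipedia-on-ipfs.org.  3600  IN  TXT  "dnslink=/ipns/<peer-id-hash>"
```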
<charlienyc[m]>
Can I just get a link? I'm trying to distribute a URL on the ground in Turkey
<dsal>
The link would be whatever is supposed to be in that TXT record.
wak-work has quit [Remote host closed the connection]
<charlienyc[m]>
Can I just get a link? I'm trying to distribute a URL on the ground in Turkey. I'm on mobile.
<charlienyc[m]>
And so is everyone else in Turkey
<Kubuxu>
charlienyc[m]: it wasn't yet deployed
<charlienyc[m]>
Gotcha.
Oatmeal has quit [Quit: Suzie says, "TTFNs!"]
<Kubuxu>
charlienyc[m]: we got wiki foundation asking to do some things before we deploy it so please be patient
wak-work has joined #ipfs
<charlienyc[m]>
There was another url yesterday, but I cannot find it now. I can only find the Turkish one, and a significant minority dont speak Turkish. English/Arabic are, by far, the most popular languages, and I saw the issue to add Kurdish and Arabic to this project
<charlienyc[m]>
I'm patient, just didn't realize it wasn't live yet. Thanks so much for your hard work!
<Kubuxu>
We've got an English one in process too, but it is much bigger.
<charlienyc[m]>
Like 14M links.
<charlienyc[m]>
If you can share, what is wmf asking of ipfs? I'm curious
<whyrusleeping>
charlienyc[m]: Hey, wanna email us? jbenet would love to chat
<charlienyc[m]>
I messaged on here. Can you pm me the email addresses? Googling is pretty impossible at 20kb/s
<Kubuxu>
use wikipedia-project@ipfs.io
<TUSF>
Oh? What's this wikipedia project? A mirroring project to get Wikipedia on IPFS?
<whyrusleeping>
TUSF: yeah
sprice1 has joined #ipfs
galois_d_ has quit [Remote host closed the connection]
<dsal>
Seems like a couple simple things could work there.
<timthelion[m]>
I have an ipfs gateway on ipfs.hobbs.cz which should be closer to Turkey geographically than gateway.ipfs.io. Uplink is 300 Mbps; not sure if that suffices. But if/once the main gateway is blocked, feel free to switch to mine.
<timthelion[m]>
That was for you charlie...
DustinSwan[m] has joined #ipfs
john2 has joined #ipfs
iomonad has joined #ipfs
john2 has quit [Client Quit]
john4 has quit [Ping timeout: 240 seconds]
<charlienyc[m]>
timthelion: thanks. However, I'm guessing your gateway still does not have a full mirror in Turkish, Arabic, Kurdish, or English. Correct?
mmuller has quit [Ping timeout: 258 seconds]
<timthelion[m]>
Well the way IPFS works, is that my gateway would ask the network for the parts of the DAG that it needs in order to serve the pages that are requested.
<timthelion[m]>
those pages that get requested often will be cached
<timthelion[m]>
It is trivial to switch between gateways
<whyrusleeping>
dsal: yeah, file a bug for that
<dsal>
whyrusleeping: where?
<dsal>
regular ipfs?
<whyrusleeping>
dsal: uhm... yeah, ipfs/go-ipfs works
<dsal>
Not sure if it's spec or implementation
<whyrusleeping>
probably impl
<timthelion[m]>
But every time you do so, your cache is cleared, so everything will be slow for a time.
<timthelion[m]>
It is also possible to do pre-pinning. That is, ask several gateway operators to pin a dag.
<timthelion[m]>
And that way the cache would be warmed up before the switch occured.
<timthelion[m]>
Since I presume you'll be wanting to switch gateways over time as the government blocks them.
<charlienyc[m]>
Correct.
<charlienyc[m]>
Or run local gateways with Vpns to...somehwere
<timthelion[m]>
The gateway operator's job is really easy, as we don't actually have to do anything except potentially cache warming (which requires having enough disk space).
<timthelion[m]>
The hard part is getting all the data into IPFS.
<timthelion[m]>
How much data is the "full DAG"?
<charlienyc[m]>
I have an old snapshot on an sd card, but I imagine the hash processing is the real bottleneck
<timthelion[m]>
That would be the contents of all the versions of wikipedia that you want to serve?
mmuller has joined #ipfs
<timthelion[m]>
hashing is fast.
<charlienyc[m]>
So, for the time being, can I just add that to ipfs on my laptop?
<timthelion[m]>
Yes.
<TheGillies>
Is there any formal relationship between ipfs and akasha?
<charlienyc[m]>
And how would I then serve it to local mobile users?
<timthelion[m]>
just run ipfs add /path/to/sd/card
<timthelion[m]>
and that will run in the time that it takes to read the data from disk and print out a hash at the end.
<timthelion[m]>
And if you have the IPFS daemon running, I'll be able to start pinning it immediately, but I'd need your laptop to be on the whole time ;)
<charlienyc[m]>
I can guarantee the laptop is on, but I cannot guarantee the power/internet won't drop
<charlienyc[m]>
In fact, I can pretty much guarantee it won't.
<timthelion[m]>
It's probably better not to use an old copy though, if a new one is available.
<charlienyc[m]>
The SD card transfers at 9Mb/s; would it be worth moving it to the SSD first?
<timthelion[m]>
but we really want it to work in the browser...
<Kubuxu>
we are just getting new machine for this purpose
<Kubuxu>
timthelion[m]: we are already doing that
<charlienyc[m]>
Can someone with a real connection take a dump and make a torrent?
<charlienyc[m]>
😀
<timthelion[m]>
Kubuxu: OK, once you have a hash, ping me and I'll pin it, so it'll be faster to switch gateways once the first one is blocked.
<Kubuxu>
sure
<Kubuxu>
also: if there are enough users in Turkey (and enough full dumps) and users use go-ipfs it should be quite hard to block
TUSF has quit [Ping timeout: 255 seconds]
<timthelion[m]>
Perhaps you should alter the HTML to suggest to people that they should install go-ipfs
<Kubuxu>
We will do that
<Kubuxu>
that is one of the changes
<charlienyc[m]>
They are inspecting github traffic now too, so moving a mirror to gitlab that I can put on the ipfs boxes is a good idea
<charlienyc[m]>
Like, I can't git clone tor as of last night
<charlienyc[m]>
And any site that talks about vpns is blocked
<dsal>
Are there tools to inspect objects WRT: how "available" they are?
<charlienyc[m]>
Can you rephrase the question?
<therealklanni[m]>
WRT=with regards to?
<dsal>
charlienyc[m]: I've got a tool to do local git replication mirrors from upstream triggers and another tool that will store and forward triggers (and another tool that will synthesize triggers for other people's github repos).
<dsal>
Yes, sorry -- I want to know if a particular object exists outside of my laptop, and perhaps how many replicas there might be.
<therealklanni[m]>
That would be an interesting tool
<dsal>
I was hoping the stat tool would do something like this.
<dsal>
There's a way to locate an object, so it must be able to at least estimate availability by knowing a few sources.
Encrypt has joined #ipfs
<whyrusleeping>
dsal: ipfs dht findprovs <hash>
<dsal>
Wow, that returned a lot fast.
<dsal>
Thanks.
<dsal>
I guess the only thing I don't know at this point is whether they're pinned.
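The availability check whyrusleeping suggests looks roughly like this (the hash is a placeholder, and the command needs a running daemon):

```shell
# List peers currently advertising the block on the DHT:
ipfs dht findprovs QmExampleHash
# Each output line is a peer ID. This shows who *can* serve the
# block right now, but — as noted above — not whether any of those
# peers has it pinned; an unpinned copy may vanish at that peer's
# next garbage-collection run.
```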
<therealklanni[m]>
What's the best way to "discover" things? Only by someone passing around an ipfs/ipns hash? Is it possible to just ls random peers and find objects?
<whyrusleeping>
therealklanni[m]: you could do `ipfs log tail` and watch for provider messages
<whyrusleeping>
thats always fun
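That suggestion might be sketched like so (assuming a running go-ipfs daemon; the grep filter is a guess at how provider events are labelled, which may vary between versions):

```shell
# Stream the daemon's event log and keep only lines that look like
# provider announcements:
ipfs log tail | grep -i provider
```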
jsgrant_om has quit [Ping timeout: 268 seconds]
bwerthmann has quit [Ping timeout: 240 seconds]
<therealklanni[m]>
Interesting
jkilpatr has quit [Ping timeout: 260 seconds]
davipirata[m] has joined #ipfs
ashark has quit [Ping timeout: 264 seconds]
mahloun has quit [Ping timeout: 240 seconds]
Guest226540[m] has joined #ipfs
Encrypt has quit [Quit: Quit]
asyncsec_ has joined #ipfs
asyncsec has quit [Ping timeout: 245 seconds]
mbags has quit [Remote host closed the connection]
jkilpatr has joined #ipfs
cwahlers has joined #ipfs
cwahlers_ has quit [Ping timeout: 264 seconds]
grumble has quit [Quit: Failure configuring Windows updates. Reverting changes. Do not turn off your computer.]
<timthelion[m]>
So how do I get ssl to work with it? Currently I'm just reverse proxying to ipfs from lighttpd...
<TUSF>
Pretty much because it's behind a proxy
<timthelion[m]>
aha
<lgierth>
yeah it's nginx that does ssl for ipfs.io
<timthelion[m]>
this whole ssl thing isn't as simple as one would hope :/ No wonder the ssl nagging makes people crabby...
<lgierth>
caddy does ssl pretty nicely though
<TUSF>
>isn't as simple as one would hope
<TUSF>
Why do you think no one does it directly?
<lgierth>
it'll just automatically fetch the certs when it starts the first time, and then keeps renewing them
<timthelion[m]>
In order to set up a letsencrypt cert for ipfs.hobbs.cz I need to serve the acme files using http. That is, make it so that going to ipfs.hobbs.cz/.well-known/acme-challenge/... leads to a magic file. Currently, I can do that by stopping ipfs and rerouting ipfs.hobbs.cz to an http server. But I guess I'm going to have to change that, for certificate renewals, so that ipfs.hobbs.cz/ipfs/ leads to ipfs and
<timthelion[m]>
ipfs.hobbs.cz/.well-known leads to the http server.
<lgierth>
you'll have to put some reverse proxy in front of go-ipfs, if you wanna do ssl
<lgierth>
lighttpd, apache, nginx, caddy
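The split described above — ACME challenges served from a plain directory, everything else proxied to the gateway — could look like this as an nginx sketch (the host name comes from the discussion; 8080 is go-ipfs's default gateway port; the webroot path is illustrative):

```nginx
server {
    listen 80;
    server_name ipfs.hobbs.cz;

    # Serve Let's Encrypt HTTP-01 challenges from disk so cert
    # renewals work without stopping the gateway.
    location /.well-known/acme-challenge/ {
        root /var/www/acme;
    }

    # Everything else goes to the local go-ipfs gateway.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```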
<lgierth>
it's probably worth adding good ssl support to go-ipfs in the future
<demize>
You could also alternatively do DNS validation if you for some reason don't want a reverse proxy.
<lgierth>
basically what caddy does, but i'm not sure how easily reusable that'd be
<lgierth>
demize: still need something to do the ssl ;)
<deltab>
I believe caddy is also usable as a library
<demize>
Sure, though not necessarily something like nginx.
xelra has quit [Ping timeout: 240 seconds]
<timthelion[m]>
well, for high perf, it is guaranteed to be possible to implement faster ssl in go-ipfs than outside of it as a proxy :/... That's because the symmetric ciphers used in ssl are 1:1, that is, they can operate without a mem-copy. But that's only possible when doing encryption in-process and not via a proxy.
<timthelion[m]>
But it's not clear to me that that's exactly the one bottleneck holding back ipfs right now ;)
btmsn has joined #ipfs
<timthelion[m]>
I'm pretty sure dev time would be better spent optimising the DHT.
<lgierth>
yeah agreed
<lgierth>
i'm also not worried at all about ssl at the moment ;)
<timthelion[m]>
lgierth: I started thinking about it because of this Turkey project. I thought that it would be bad to offer my gateway as a backup when I haven