vapid has quit [Quit: No Ping reply in 180 seconds.]
vapid has joined #ipfs
<lgierth>
and /dns carries a lot more complexity when it comes to filtering addresses before a resolve() step. you have no idea if /dns is going to resolve to /ip4 or /ip6 or both. that means you don't know if a /dns address is actually supported by your libp2p transports until you've resolved it
fleeky__ has joined #ipfs
fleeky_ has quit [Ping timeout: 255 seconds]
anewuser has joined #ipfs
cwahlers_ has quit [Ping timeout: 245 seconds]
<mguentner>
lgierth: need to investigate further...this looks like a fundamental issue to me (after ~12k files the repo size explodes...as in e^x)
cwahlers has joined #ipfs
<lgierth>
ugh :/
arpu has quit [Remote host closed the connection]
espadrine_ has quit [Ping timeout: 245 seconds]
<mguentner>
lgierth: it's the gc as far as I can see
<mguentner>
so adding a lot of files without collecting garbage in between is not a good idea
tmg has joined #ipfs
suttonwilliamd has joined #ipfs
<lgierth>
no it's a bug, i was just asking about gc to see whether it's the actual objects that turn out way too big, or whether it's spurious unwanted objects
<lgierth>
mind filing an issue? :)
<lgierth>
adding a lot of files is a core part of ipfs so any disruption to that is bad
beyond has joined #ipfs
<beyond>
Hello
<mguentner>
lgierth: just wrote something that can be fed into matplotlib
<beyond>
I am just checking out ipfs and had a couple of issues with usage, errors such as "context deadline exceeded" when trying to view example files using the hash
<lgierth>
mguentner: would be awesome if you could post that with the issue, as inspiration for others posting issues -- we need more nice graphs in discussions :)
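A minimal sketch of the kind of reproduce-and-measure script being discussed, assuming a running go-ipfs daemon and the default repo at ~/.ipfs; the batch size, file size, and file count are arbitrary choices for illustration:

```shell
#!/bin/sh
# Hypothetical sketch: add many small random files and log repo size,
# to reproduce the "repo size explodes after ~12k files" behaviour.
# Requires a running ipfs daemon; repo path assumed to be ~/.ipfs.
mkdir -p /tmp/repro && cd /tmp/repro
for i in $(seq 1 12000); do
    head -c 1024 /dev/urandom > "file_$i"
    ipfs add -q "file_$i" > /dev/null
    # every 100 files, record "<files added> <repo size in KiB>"
    if [ $((i % 100)) -eq 0 ]; then
        du -sk "$HOME/.ipfs" | awk -v n="$i" '{print n, $1}' >> repo-growth.dat
    fi
done
# repo-growth.dat is two columns and can be fed straight into matplotlib.
```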
<lgierth>
(i mean post the code)
<lgierth>
beyond: "context deadline exceeded" means timeout -- so probably something's not connecting right
<lgierth>
mguentner: thanks for the whole day on HN btw :)
<beyond>
Yea, it's a "path resolve error", so something's not quite right. I have created FW rules to allow traffic in on the respective TCP ports, but no change.
<lgierth>
ipfs swarm will show you whether you have peers. are you getting the error on ipfs.io when accessing something you added locally?
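The connectivity check lgierth describes, assuming a local daemon is running:

```shell
# List currently connected peers; an empty list means nothing is connecting.
ipfs swarm peers

# Count them, as a quick health check.
ipfs swarm peers | wc -l

# Content added locally is only fetchable through the public gateway
# while your node is reachable, e.g.:
#   https://ipfs.io/ipfs/<your-hash>
```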
<mguentner>
lgierth: hehe - I hope that IPFS and NixOS attract more attention and developer time (both projects need it) that way
<beyond>
Give me a min, I am removing the install and will re-install using the given install script, maybe* installing manually failed to complete properly
rendar_ has joined #ipfs
bastianilso_ has quit [Quit: bastianilso_]
rendar has quit [Disconnected by services]
rendar_ has quit [Client Quit]
rendar has joined #ipfs
<nicolagreco>
lgierth: fun, ericson and I had a long github conversation on exactly that, I connected the dots yesterday
<lgierth>
cool cool! make sure to ping them :)
<nicolagreco>
yes!
<beyond>
Running a daemon, is there a way to NOT have a term window open all the time, while the daemon is running? Closing the term window shuts down the daemon
<SchrodingersScat>
screen?
<mguentner>
tmux?^^
<SchrodingersScat>
a screen inside a tmux?
<beyond>
gnome term
<SchrodingersScat>
that dropdown menu one?
<beyond>
it's the default term that came installed, I suppose there is better?
chris613 has joined #ipfs
<beyond>
But is not the behavior the same? Close window = process killed
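A few common ways to keep the daemon alive after the terminal closes; these are standard Unix tools, nothing IPFS-specific:

```shell
# Option 1: nohup detaches the daemon from the terminal entirely.
nohup ipfs daemon > ~/ipfs-daemon.log 2>&1 &

# Option 2: run it in a tmux session you can detach from (Ctrl-b d)
# and re-attach to later.
tmux new -d -s ipfs 'ipfs daemon'
tmux attach -t ipfs

# Option 3: screen works the same way (Ctrl-a d to detach).
screen -dmS ipfs ipfs daemon
```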
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
cemerick has joined #ipfs
crossdiver has joined #ipfs
diffalot_ has quit [Ping timeout: 240 seconds]
ashark has quit [Ping timeout: 248 seconds]
ashark has joined #ipfs
cemerick has quit [Ping timeout: 248 seconds]
anewuser has quit [Read error: Connection reset by peer]
anewuser has joined #ipfs
<beyond>
When adding a directory (add-r) does this option automatically add new files as they are added to the directory?
<beyond>
So could I add files automatically by creating them in the directories I added?
<beyond>
Also, can I give write access per-directory?
<achin>
no, when you use "ipfs add -r <directory>", every file in that directory is added and then you get an ipfs hash that is a snapshot of that directory
<achin>
if you change any of the files, you can run "ipfs add -r <directory>" again, and get a new hash
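achin's point can be seen directly: each `ipfs add -r` returns a root hash that is a snapshot, and changing any file yields a different root hash (paths below are illustrative):

```shell
mkdir -p mysite && echo "hello" > mysite/index.txt

# First snapshot: the last line printed is the root hash for the directory.
ipfs add -r mysite

# Change a file and add again: a new, different root hash is returned.
echo "hello again" > mysite/index.txt
ipfs add -r mysite

# The old snapshot remains addressable by its old hash until it is GC'd.
```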
<beyond>
ok, that makes sense.
<beyond>
What are some of the ways ipfs is being used by others now that it's released?
<lgierth>
twitter.com/ipfsbot actually is a good list of that ;)
<lgierth>
i just repost most of what i find
<beyond>
I am having problems viewing the contents of other nodes, using the id and the gateway.ipfs.io website..
<beyond>
It never seems to resolve the address
structuralist has quit [Remote host closed the connection]
<beyond>
Here is why I am looking at ipfs: distributing files (text) which are controversial in nature (leaks/info/commentary), yet I want to do so in a way that is permanent. A "blog" of sorts with daily additions (hence the need to update content frequently), with the end goal being to circumvent the censorship of political and spiritual content that falls outside the BS the media and "tptb" want us to believe.
<achin>
content on IPFS is only permanent as long as at least 1 person is committed to hosting the data
LowEel has quit [Ping timeout: 276 seconds]
wkennington has joined #ipfs
structuralist has joined #ipfs
<beyond>
ahh... ok. I'm still playing with it, mostly troubleshooting so far. Something's quirky with the install I think
<achin>
let us know if there's something we can help with
structuralist has quit [Remote host closed the connection]
<beyond>
Ok, I have the id of a peer that is listed when I "swarm peers". I paste that id/hash into the end of gateway.ipfs.io, and the address is never resolved.
<beyond>
Am I just misinterpreting the usage, or should* that bring up the same screen / folder list as it does when I use my own id/hash?
<Mateon1>
beyond: You're probably trying to resolve the Peer ID as an /ipfs hash; that won't work. All nodes should, however, have an /ipns ID. Just replace /ipfs/ with /ipns/ in the URL and it should work
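Mateon1's distinction between peer IDs and content hashes, in CLI form (QmPeerId... is a placeholder):

```shell
# A peer ID is not content, so fetching it as /ipfs/ fails.
# But a peer may have published an IPNS record pointing at content:
ipfs name resolve /ipns/QmPeerId...

# Equivalently, via a public gateway:
#   https://ipfs.io/ipns/QmPeerId...
```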
<Mateon1>
Spoiler: Most IPNS entries are empty directories
<beyond>
Mateon, trying that now
<beyond>
"Path Resolve error: Could not resolve name"
<Mateon1>
There are very rare cases where the republisher for IPNS gets stuck, it might be that
<Mateon1>
I had that happen once
<achin>
note that IPFS doesn't let you browse the data an ipfs node is hosting (unless of course that node is your own)
<Mateon1>
achin: Yep
<Mateon1>
IPNS is only a single pointer which can be set to any hash.
<Mateon1>
beyond: If you do `ipfs swarm peers` and select a few random peer IDs, you can find some cool blogs or content dumps
<beyond>
achin, ok yea I see
LowEel has joined #ipfs
<beyond>
Mateon, that's about all I am looking for. I just misunderstood the implementation of ipfs
<beyond>
But Mateon, when I "select" the peer, I am still unable to ipns/ipfs the peer @ gateway.ipfs.io
<achin>
depending on the peer, that's normal
<beyond>
ok, tryin more.
<achin>
i actually don't recommend that method to find content. best to check out the link from lgierth or something like https://github.com/ipfs/archives
<SchrodingersScat>
how about the ipf-search?
<SchrodingersScat>
ipfs-search
<Mateon1>
Also, look at ipfs.pics
<beyond>
I see how I can still use ipfs, but not how I originally thought. There is no method of "distribution/advertisement" per se, just "linking" to the file by a hash that is already known
<Mateon1>
I mostly use IPFS for sharing content quickly to other people, like screenshots
<beyond>
and must be already known, correct?
<Mateon1>
I do `ipfs add file.png`, copy the hash, and share a gateway link. Works pretty well
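Mateon1's workflow as a two-liner, assuming a go-ipfs version where `-Q` (print only the final hash) is available; otherwise parse the normal `ipfs add` output:

```shell
# Add the file and capture just the root hash.
hash=$(ipfs add -Q file.png)

# Share a public gateway link.
echo "https://ipfs.io/ipfs/$hash"
```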
<beyond>
ok cool
<SchrodingersScat>
does ipfs name publish take a while? or is mine hanging?
<Mateon1>
Well, there are some performance issues with IPFS publish. If it doesn't finish within 1 minute try restarting your daemon
<Mateon1>
s/IPFS/IPNS
<SchrodingersScat>
oops, helps if I use it correctly...
<SchrodingersScat>
hashes make me sad, how am I going to remember this?
<Baffy[m]>
is the data already saved? like, you just have to move it into ipfs?
<SchrodingersScat>
you didn't load every file one by one?
structur_ has joined #ipfs
<Mateon1>
SchrodingersScat: You can add a TXT record to a DNS name if you own a domain.
<Mateon1>
It's a TXT record at _dnslink.your.domain with the value dnslink=/ipns/QmBlah
<Mateon1>
(or /ipfs/QmBlah)
anewuser has quit [Quit: anewuser]
<Mateon1>
Then you can resolve /ipns/your.domain.com
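The DNS setup being described, with a hypothetical domain and hash; the TXT record sits at the `_dnslink` subdomain and its value carries the `dnslink=` prefix:

```shell
# Zone-file fragment (hypothetical domain and hash):
#   _dnslink.your.domain.com.  300  IN  TXT  "dnslink=/ipns/QmYourPeerId"

# Once the record has propagated, verify it:
dig +short TXT _dnslink.your.domain.com
ipfs resolve /ipns/your.domain.com
```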
<SchrodingersScat>
true, neat
<SchrodingersScat>
oh, that's even better
structuralist has quit [Ping timeout: 255 seconds]
<Mateon1>
Also, if you point the A/AAAA records at an IPFS gateway, it will automatically find the host name, resolve the DNS link, and serve the proper hash at your.domain.com
<Mateon1>
(in my understanding, I can't try that as I don't own any domains)
<SchrodingersScat>
some are like $0.88 for the first year :^)
dignifiedquire has quit [Quit: Connection closed for inactivity]
<SchrodingersScat>
can you even divide a btc into .88$ now?
<Mateon1>
That's cool
<Mateon1>
SchrodingersScat: Yes, there are 100m indivisible units (satoshis) within a BTC IIRC
<Mateon1>
Currently 1 BTC ~= 1000 USD. It's convenient to use mBTC = 1/1000 BTC
ashark has quit [Ping timeout: 240 seconds]
<SchrodingersScat>
neat, it is going back up
<SchrodingersScat>
Mateon1: are you a btc millionaire?
arkimedes has quit [Ping timeout: 240 seconds]
<Mateon1>
Nope, unfortunately
<Mateon1>
I mined some coins on a GPU in 2012, but lost the wallet
<SchrodingersScat>
k, I was going to ask to '''borrow''' some...
<SchrodingersScat>
should have put it in ipfs :(
<SchrodingersScat>
lgierth: are there any plans to make ipfs more of a transparent proxy? Seems to switch over I'd need to get both ipfs and things off network?
<Mateon1>
SchrodingersScat: Do you mean, as in, "archive existing things on the net and/or fetch from IPFS"? If so, that's technically infeasible.
<Mateon1>
If you mean, redirect requests for ipfs.io to the local gateway, there exist browser extensions for that
<SchrodingersScat>
I mean the first, yes.
<SchrodingersScat>
why can't my ipfs node fetch things like a socks proxy and cache them in case another ipfs user needs them?
<Mateon1>
In short: It's impossible to know whether something is dynamic content or not, and whether some info depends on whether you're logged in, etc. This idea is hard, bordering on impossible.
<lgierth>
existing cdns would be great to incorporate such a feature: for every page served, add it to ipfs and add its hash as an http header
<Mateon1>
Yep, but that's just for static content
<lgierth>
and have the ipfs browser addons optionally do something with that header
<lgierth>
the cdns have a pretty good idea though
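One way the CDN-side idea could look, sketched as an assumption; the header name mirrors the X-Ipfs-Path header that IPFS gateways already send, but everything else here is hypothetical:

```shell
# Hypothetical flow on a CDN edge: add the served file to ipfs,
# then advertise its hash in a response header.
hash=$(ipfs add -Q /var/www/page.html)

# e.g. as an nginx directive:
#   add_header X-Ipfs-Path "/ipfs/$hash";
# A browser addon seeing this header could fetch the same content
# from a local gateway instead of the origin.
```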
<SchrodingersScat>
I see, so it would be more on the cdn people to switch over to an ipfs system?
<Mateon1>
Also, requires work on the part of the CDN
<lgierth>
sure
<lgierth>
the cloudflare people who presented at 33c3 liked ipfs so that's a start :P
<Mateon1>
It seems that IPFS obsoletes CDNs anyway, there will only really be pinning services spread over the world :P
<SchrodingersScat>
lets hope
<lgierth>
SchrodingersScat: not switch over. support both
<lgierth>
that's the "upgrade path". how to integrate at the various layers of the existing systems in a way that they interoperate seamlessly, and let the existing thing become obsolete over time
<Mateon1>
lgierth: We know how that turned out with IPv6
<SchrodingersScat>
right, i remember that part, I'm not clever though, I need time.
<SchrodingersScat>
the last ipv4 address was assigned in 1994
<lgierth>
Mateon1: ipfs is not designed by committee
<Mateon1>
True
<SchrodingersScat>
it's a dictatorship?
<FrankPetrilli>
Disagree on that last bit. IPv4 allocations are still being handed out by RIRs.
<FrankPetrilli>
ARIN will grant small prefixes, AfriNIC has a good bit of space if you're registered in Africa.
<lgierth>
and a /24 costs like $4000 at the moment
<SchrodingersScat>
soon you'll only be able to get backstreet, underground ip addresses
<FrankPetrilli>
Yeah. Meanwhile a /32 costs $1000 from ARIN. I'm 100% v6 where I can be, trust me.
<FrankPetrilli>
You can do what so many shady places do and just squat on dead prefixes.
<lgierth>
FrankPetrilli: have you heard of cjdns yet? it has a cryptographic approach to address space (hash-of-pubkey = fc00::/8 addr)
<FrankPetrilli>
I'm very aware of CJDNS. :)
<Mateon1>
I wish it was easier to get a private IPv6 block as a private individual (and student :P)
<FrankPetrilli>
Mateon1: Are you familiar with he.net's tunnels?
Akaibu has joined #ipfs
<Mateon1>
FrankPetrilli: Not exactly
<lgierth>
mguentner: awesome thanks -- could you also post the commands to reproduce it? it's okay if it's just the respective ipfs add commands
<FrankPetrilli>
https://tunnelbroker.net Here. Free /64 automatically, a /48 once you click a button.
<mguentner>
lgierth: just read the README inside the hash
<FrankPetrilli>
Mateon1: It's tunnelled, so there's some extra latency, but it's HE.net, who runs one of the largest transit networks in the world, so they have a tunnel endpoint close to you.
eaterof has quit [Read error: Connection reset by peer]
<Mateon1>
FrankPetrilli: That's cool. I currently have very crippled IPv6, so I might try this out
<lgierth>
mguentner: ooh i missed that last line somehow -- awesome
eater has joined #ipfs
<FrankPetrilli>
Mateon1: Yeah, it's a solid solution if you don't have native v6.
<lgierth>
FrankPetrilli: cool let me know whenever you need anything about cjdns, i contribute to it semi-regularly (openwrt and android ports are mine)
<Mateon1>
Well, I have native IPv6, except all incoming traffic is blocked at the ISP.
<FrankPetrilli>
lgierth: Awesome, great to know! :)
<Mateon1>
I can use websites over IPv6, but I can't connect from the outside
<FrankPetrilli>
Mateon1: ... Jesus why. Support might be able to open it up. I can see why they default block given Mirai, etc.
<Mateon1>
Their support is quite crap, I don't wish to deal with them really
<Mateon1>
Resolving an actual "somebody cut a wire in the ground, and the fix caused massive intermittent packet loss" issue was extremely painful
<bilge_emulsion[m>
That's nuts.
<SchrodingersScat>
guerrilla isping
<bilge_emulsion[m>
What is the point of putting in IPv6 outbound only?
<FrankPetrilli>
No NAT required for many sites, at least.
<bilge_emulsion[m>
NAT on the site's routers you mean?
<FrankPetrilli>
We're reaching a point with v4 exhaustion where Carrier-Grade NAT is required for many ISPs.
<FrankPetrilli>
At least with unidirectional v6, you've avoided the ridiculous CPU load for like 10% of total traffic, and like 50-60% of your bulk traffic (Netflix has great v6 support)
<SchrodingersScat>
come to think of it, I don't host anything from home, I simply have a reverse tunnel to a more reliable line in case I need anything from home.
<Mateon1>
Well, CJDNS works as my tunnel, but out-only IPv6 (and terrible IPv4 NAT) reduces peering significantly
<FrankPetrilli>
lgierth and Mateon1: Curious if you've heard of dn42. Only slightly related, but an interesting rabbit hole as well.
noffle has joined #ipfs
<Mateon1>
Not yet, but seems interesting. It's connected to Freifunk, isn't that the original Batman network?
<noffle>
hey ipfstronauts o/
<SchrodingersScat>
\o
<FrankPetrilli>
Yeah, to a degree. DN42 is a mostly VPN-only network though, run by people playing with large-scale BGP in a way that would normally get you nastygrams and named/shamed on NANOG.
<lgierth>
also meet kyledrake who runs his own BGP and anycast for neocities
nausea has joined #ipfs
nausea has joined #ipfs
nausea has quit [Changing host]
<FrankPetrilli>
Ahh, good to hear! I saw a discussion from kyledrake a month or so back I think, about doing anycast for cheap via VPSes?
<lgierth>
SchrodingersScat: always make sure you're allowed to redistribute the things you add -- i think xkcd is okay to redistribute
<SchrodingersScat>
lgierth: i thought it was on the archive pages somewhere >_>
<FrankPetrilli>
It looks like Creative Commons attribution.
<lgierth>
FrankPetrilli: yeah i think so too :) was just a friendly note
<lgierth>
oops i meant SchrodingersScat ^
<SchrodingersScat>
FrankPetrilli: it was for science, I'm sure he'll understand >_>
<FrankPetrilli>
SchrodingersScat: I think you're good with CC Attribution. It's clarified below as "This means you're free to copy and share these comics (but not to sell them)"
* SchrodingersScat
kicks away his lucrative deal to sell knockoff xkcd comics on the ipfs market
<mguentner>
lgierth: could you please tell the pinbot to pin QmcsrSRuBmxNxcEXjMZ1pmyRgnutCGwfAhhnRfaNn9P94F? I am not sure how long I can pin this on my test setup
<lgierth>
!pin QmcsrSRuBmxNxcEXjMZ1pmyRgnutCGwfAhhnRfaNn9P94F go-ipfs/ipfs#3621 graph and scripts
<pinbot>
now pinning /ipfs/QmcsrSRuBmxNxcEXjMZ1pmyRgnutCGwfAhhnRfaNn9P94F
<SchrodingersScat>
has anyone tried hooking up ipfs to an '''unlimited''' cloud option and see how much they could store in it?
<Mateon1>
Most cheap "unlimited" cloud options are not really unlimited, and their TOS are very restrictive. You also don't get a shell, which is needed to install and run IPFS
Mizzu has quit [Quit: WeeChat 1.6]
<SchrodingersScat>
Mateon1: right, that's why the heavy quotations on unlimited; thinking of amazon there. if a FUSE option exists then I was thinking of mounting the '''cloud''' space using something like google-drive-ocamlfuse, mounted as the ~/.ipfs/ or possibly even only the ~/.ipfs/blocks/, depending on whether you need/want the rest in a cheap cloud option
<SchrodingersScat>
2017 resolution, to have to send an email to amazon, "No, don't delete my account, it hosts the internet D:"
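The mount idea sketched as commands; google-drive-ocamlfuse is a real tool, but whether the flatfs blockstore behaves acceptably over a network FUSE mount is untested, and the paths are illustrative:

```shell
# Mount the cloud storage somewhere local.
mkdir -p ~/gdrive
google-drive-ocamlfuse ~/gdrive

# Relocate only the blockstore onto it (stop the daemon first).
mv ~/.ipfs/blocks ~/gdrive/ipfs-blocks
ln -s ~/gdrive/ipfs-blocks ~/.ipfs/blocks
```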
maxlath has quit [Ping timeout: 248 seconds]
anewuser has quit [Quit: anewuser]
pfrazee has joined #ipfs
cemerick has joined #ipfs
cemerick has quit [Ping timeout: 248 seconds]
<SchrodingersScat>
that feel when there's no nodes in antarctica
tclass has quit [Remote host closed the connection]
Mizzu has joined #ipfs
pfrazee has quit [Remote host closed the connection]
espadrine_ has quit [Ping timeout: 248 seconds]
structuralist has quit [Ping timeout: 256 seconds]
vapid has quit [Read error: Connection reset by peer]
vapid has joined #ipfs
dryajov has joined #ipfs
dryajov has quit [Client Quit]
cemerick has joined #ipfs
dryajov has joined #ipfs
arkimedes has joined #ipfs
dryajov has quit [Client Quit]
mildred4 has joined #ipfs
dryajov has joined #ipfs
dryajov has quit [Client Quit]
mildred has joined #ipfs
apiarian has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
mildred4 has quit [Ping timeout: 240 seconds]
wak-work has quit [Ping timeout: 240 seconds]
apiarian has joined #ipfs
mildred has quit [Ping timeout: 240 seconds]
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
wak-work has joined #ipfs
Caterpillar has joined #ipfs
maxlath has quit [Ping timeout: 256 seconds]
espadrine_ has joined #ipfs
arpu has quit [Ping timeout: 245 seconds]
<gde33>
I had this hilarious idea: make a multiplatform boilerplate application that does nothing(!?) It starts with the OS, then shuts down again, and has automatic updates. Then publish and promote it on all the usual channels as if it were $projectName. App stores, gnome software center, post it on hacker news, reddit, etc
mildred has joined #ipfs
<gde33>
Then gradually add things to it. An about popup, project status log, links to websites repo's and bug trackers.
<gde33>
a gui of course
tclass has quit [Remote host closed the connection]
perrier-jouet has left #ipfs [#ipfs]
<gde33>
then one can have the nerd-grade impossible installation process without pretending there are no users.
tangent128 has quit [Quit: ZNC 1.6.2 - http://znc.in]
<gde33>
if there are some thousands of placebo ipfs users, one might be tempted to run something ipfsish on those nodes
G-Ray has quit [Quit: G-Ray]
maxlath has joined #ipfs
arpu has joined #ipfs
tangent128 has joined #ipfs
tclass has joined #ipfs
infinity0 has joined #ipfs
john2 has quit [Ping timeout: 276 seconds]
anewuser has joined #ipfs
<whyrusleeping>
!pin /ipfs/QmRgeXaMTu426NcVvTNywAR7HyFBFGhvRc9FuTtKx3Hfno dists page with 0.4.5-pre2
<pinbot>
now pinning /ipfs/QmRgeXaMTu426NcVvTNywAR7HyFBFGhvRc9FuTtKx3Hfno
john2 has joined #ipfs
ianopolous has quit [Ping timeout: 245 seconds]
tclass has quit [Remote host closed the connection]
apiarian has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
kulelu88 has joined #ipfs
kulelu88 has quit [Remote host closed the connection]
dryajov has joined #ipfs
dryajov has quit [Client Quit]
dryajov has joined #ipfs
dryajov has quit [Client Quit]
<whyrusleeping>
pinbot, you okay?
<SchrodingersScat>
RIP in pinbot
<whyrusleeping>
:(
chris613 has joined #ipfs
<SchrodingersScat>
it is a pretty big pin
Boomerang has joined #ipfs
<whyrusleeping>
but they have most of it already
<alterego>
Anyone had success using pubsub experiment with http api?
<achin>
ipget doesn't need a running daemon, correct?
<whyrusleeping>
achin: correct
dryajov has joined #ipfs
<whyrusleeping>
alterego: i havent personally done it, but i know orbit uses it through the http api
<whyrusleeping>
Yeah, look at the test it's showing as failing, then look at the commit it's supposedly testing
ygrek_ has quit [Ping timeout: 276 seconds]
tmg has joined #ipfs
mildred has joined #ipfs
apiarian has joined #ipfs
ianopolous has joined #ipfs
shizy has joined #ipfs
ygrek_ has joined #ipfs
rendar has joined #ipfs
pfrazee has joined #ipfs
<SchrodingersScat>
lgierth: 1.5G Jan 22 17:17 QmR8udjuVj6TG84HsMFJzxjRufqMd4PPGcakvuYZn4vpEy
reit has quit [Quit: Leaving]
<whyrusleeping>
same
shizy has quit [Ping timeout: 255 seconds]
mildred has quit [Read error: Connection reset by peer]
mildred has joined #ipfs
pfrazee_ has joined #ipfs
pfrazee has quit [Read error: Connection reset by peer]
kulelu88 has joined #ipfs
dryajov has joined #ipfs
G-Ray has quit [Quit: G-Ray]
dryajov has quit [Client Quit]
<lgierth>
thank you
pfrazee_ is now known as pfrazee
john3 has quit [Ping timeout: 255 seconds]
mildred has quit [Read error: Connection reset by peer]
ivegotasthma[m] has joined #ipfs
wkennington has joined #ipfs
mildred has joined #ipfs
ygrek_ has quit [Ping timeout: 258 seconds]
reit has joined #ipfs
<dignifiedquire>
kumavis: made some progress, upgraded rust-multibase a bit for my needs in rust-cid which only needs some more tests and clean up now :)
<whyrusleeping>
if anyone wants to try out the new fs-repo-migrations for the 0.4.5 pre-release, that would be very helpful
<whyrusleeping>
you'll have to build them from source, on the fs-repo-migrations 'feat/4-to-5' branch
<Mateon1>
whyrusleeping: I have no backups, so only after saving my pin list :P
<Mateon1>
Is there an easy way to get the object that backs the pin list?
<Mateon1>
Cause ipfs pin ls takes forever
<whyrusleeping>
Mateon1: ehm... not easily
<whyrusleeping>
its in leveldb
<whyrusleeping>
so you'd have to extract that somehow
<Mateon1>
Ah, in that case I'll try all the root hashes that have 2 links and low size
<Mateon1>
Uh...
<Mateon1>
I seem to have like a hundred hashes, all of which are of form: {"data":null,"links":[{"Cid":"...","Name":"direct","Size":...},{"Cid":"...","Name":"recursive","Size":...}]}
<Mateon1>
(upon doing ipfs dag get on respective hash)
<whyrusleeping>
sounds about right
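For the backup Mateon1 wants before migrating, the plain CLI route (slow, as discussed, but simple) would be:

```shell
# Save just the pinned root hashes (recursive pins) to a file.
ipfs pin ls --type=recursive -q > pins-backup.txt

# Later, re-pin them on a fresh repo:
while read -r h; do ipfs pin add "$h"; done < pins-backup.txt
```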
mildred has quit [Read error: Connection reset by peer]
<Mateon1>
But why do I have hundreds of them?
espadrine_ has quit [Ping timeout: 248 seconds]
<whyrusleeping>
because you have changed your pinset hundreds of times and not run a gc?
<Mateon1>
Should I just select the largest by CumulativeSize?
<Mateon1>
Ah
<Mateon1>
True, I have never run a GC
<Mateon1>
Checking whether something is pinned also takes forever for some reason
ygrek_ has joined #ipfs
<whyrusleeping>
eh, yeah. it has to enumerate the entire pinset
<whyrusleeping>
i'm gonna be working on fixing some of that soon
<ianopolous>
G'day everyone!
<whyrusleeping>
ianopolous: heyo
<ianopolous>
how goes it?
<whyrusleeping>
going pretty good
<whyrusleeping>
released a new pre-release
<whyrusleeping>
and i'm testing the fs-repo-migrations for it
<whyrusleeping>
as soon as we're confident on the repo-migrations i'll release an rc
justin_ has joined #ipfs
justin__ has joined #ipfs
Yatekii has quit [Read error: Connection reset by peer]