<fiatjaf>
spikebike: haven't tested it yet, however
Guest73396 has joined #ipfs
<whyrusleeping>
fiatjaf: once the ipns PR merges (should be today), we can use ipns to build something to do that pretty easily
<fiatjaf>
spikebike: git-annex has a daemon that, as I understand, can sync content, but I don't know if it will be able to copy ipfs data automatically or only on demand.
<fiatjaf>
for my use case it seems good enough, since I'm mostly transferring content inside a LAN and the syncing tools available over the internet suck at this.
<spikebike>
fiatjaf: interesting
<fiatjaf>
whyrusleeping: thank you. I thought IPNS could be used.
<fiatjaf>
whyrusleeping: that would still require a daemon adding the content and fetching it, right?
<whyrusleeping>
yeah, it would
<whyrusleeping>
it would be interesting to apply some of my merkledag diff code
<whyrusleeping>
so if the daemon notices changes on someone elses machine it would do a diff and just apply that diff
<whyrusleeping>
that way multiple people could update at the same time
<whyrusleeping>
and as long as they dont conflict, it should work out nicely
pfraze has joined #ipfs
<fiatjaf>
whyrusleeping: do you have a link to that code?
pfraze has quit [Remote host closed the connection]
* zignig
offers whyrusleeping a high five for the ipns awesomeness.
<whyrusleeping>
you call 'Diff(ctx, nd.DAG, nodeA, nodeB)'
<whyrusleeping>
and it returns you a 'changeset'
<whyrusleeping>
zignig: :) thanks!
<whyrusleeping>
its not merged yet though, we're soooo close
<whyrusleeping>
fiatjaf: and you can then call 'MergeDiffs' and pass it two changesets to get a merged changeset, and a list of conflicting changes
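Roughly, the flow being described — diff each writer's new root against the last shared root, then merge the changesets — could be driven like this; the types and signatures below are stand-ins guessed from the conversation, not the merged go-ipfs API:

```go
// Hypothetical sketch of the merkledag-diff flow described above.
// Change, Conflict, Diff and MergeDiffs stand in for the then-unmerged
// code; the signatures are guessed from the chat, not copied from it.
package dagsync

import "context"

type Node struct{ /* merkledag node */ }
type DAGService interface{ /* Get/Add nodes */ }

type Change struct{ Path string /* plus before/after links */ }
type Conflict struct{ A, B *Change }

func Diff(ctx context.Context, ds DAGService, a, b *Node) []*Change { return nil }
func MergeDiffs(a, b []*Change) ([]*Change, []*Conflict)            { return nil, nil }

// syncOnce shows how a watching daemon might reconcile two writers:
// diff each writer's new root against the last shared root, merge the
// two changesets, and only stop if the merge reports conflicts.
func syncOnce(ctx context.Context, ds DAGService, base, mine, theirs *Node) ([]*Change, []*Conflict) {
	local := Diff(ctx, ds, base, mine)
	remote := Diff(ctx, ds, base, theirs)
	return MergeDiffs(local, remote)
}
```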
<fiatjaf>
well, that's awesome.
<fiatjaf>
I'll give other people a month after the merges. If no one starts working on a sync tool like that, I will start getting used to ipfs so I can write it :P
<jbenet>
fiatjaf: just help us build it
captain_morgan has joined #ipfs
voxelot has joined #ipfs
<zignig>
whyrusleeping: does the merkle diff have a command line interface?
wopi has quit [Ping timeout: 264 seconds]
<whyrusleeping>
zignig: nope
<whyrusleeping>
although that would be neat
<zignig>
As an aside... Is there a way to check if you have a hash locally without fetching it?
captain_morgan has quit [Remote host closed the connection]
morgan__ has joined #ipfs
<achin>
look in ~/.ipfs/blocks ?
<zignig>
achin: :P through the cmd interface.
<achin>
oh!
<whyrusleeping>
zignig: 'ipfs refs local | grep HASH'
<whyrusleeping>
or, when your daemon is offline, just try catting it
wopi has joined #ipfs
<whyrusleeping>
we really do need a --local flag
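A small helper built on exactly that suggestion (shelling out to `ipfs refs local` and scanning the output; assumes a running daemon and an exact hash match):

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

// haveLocally reports whether `ipfs refs local` lists the given hash,
// i.e. whether the block is already in the local repo.
func haveLocally(hash string) (bool, error) {
	out, err := exec.Command("ipfs", "refs", "local").Output()
	if err != nil {
		return false, err
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if sc.Text() == hash {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := haveLocally("QmSomeHash") // placeholder hash
	fmt.Println(ok, err)
}
```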
Quiark has joined #ipfs
<zignig>
sometimes it would be useful to be able to make a promise for a dag: hand back 'I don't have it, but I'm getting it now'.
<whyrusleeping>
zignig: thats been brought up
<whyrusleeping>
its tricky to manage those requests internally though
<whyrusleeping>
like, do you ever give up?
<whyrusleeping>
how do i cancel the request?
<whyrusleeping>
how do i notify someone that the request is done?
<zignig>
this was more for an external application.
charley has quit [Remote host closed the connection]
<zignig>
but they are valid questions.
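For an external application, those three questions map fairly directly onto Go's context/channel idioms; a minimal sketch (none of this is an existing IPFS API, and `ipfs refs -r` is just one way to force a fetch):

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// fetchPromise starts fetching a hash in the background and returns a
// done channel plus a cancel function: the timeout answers "do you ever
// give up?", the cancel func answers "how do I cancel the request?",
// and the channel answers "how do I get notified when it's done?".
func fetchPromise(parent context.Context, hash string) (<-chan error, context.CancelFunc) {
	ctx, cancel := context.WithTimeout(parent, 10*time.Minute)
	done := make(chan error, 1)
	go func() {
		// `ipfs refs -r <hash>` forces the whole DAG to be fetched into the local repo.
		done <- exec.CommandContext(ctx, "ipfs", "refs", "-r", hash).Run()
	}()
	return done, cancel
}

func main() {
	done, cancel := fetchPromise(context.Background(), "QmSomeHash") // placeholder hash
	defer cancel()
	fmt.Println("fetch finished:", <-done)
}
```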
<whyrusleeping>
jbenet: got time for ipns?
hellertime has quit [Quit: Leaving.]
<jbenet>
whyrusleeping not atm, unless it's a quick q. ask on github otherwise and will get to it later
<whyrusleeping>
jbenet: okay, the PR is RFM. so i'll ping you there
morgan__ has quit [Quit: Leaving]
captain_morgan has joined #ipfs
kvda has joined #ipfs
jhulten has joined #ipfs
hrjet has quit [Ping timeout: 240 seconds]
<achin>
ion: ping me tomorrow to chat about an IPFS-based DVCS
Guest73396 has quit [Ping timeout: 264 seconds]
Guest73396 has joined #ipfs
Guest73396 has quit [Client Quit]
rozap has joined #ipfs
akhavr has joined #ipfs
pfraze has joined #ipfs
hrjet has joined #ipfs
akhavr has quit [Read error: Connection reset by peer]
akhavr has joined #ipfs
Guest73396 has joined #ipfs
Guest73396 has quit [Ping timeout: 265 seconds]
akhavr has quit [Read error: Connection reset by peer]
<M-davidar>
ping outbackdingo
akhavr has joined #ipfs
Guest73396 has joined #ipfs
<M-davidar>
ping lgierth
<M-davidar>
.ask lgierth to split the raid on Pollux
akhavr has quit [Read error: Connection reset by peer]
<multivac>
M-davidar: I'll pass that on when lgierth is around.
akhavr has joined #ipfs
M-davidar is now known as davidar
jhulten has quit [Ping timeout: 265 seconds]
akhavr has quit [Read error: Connection reset by peer]
akhavr has joined #ipfs
Not_ has quit [Remote host closed the connection]
akhavr has quit [Read error: Connection reset by peer]
akhavr has joined #ipfs
amstocker_ has joined #ipfs
akhavr has quit [Read error: Connection reset by peer]
akhavr has joined #ipfs
sseagull has quit [Quit: Lost terminal]
voxelot has quit [Ping timeout: 264 seconds]
bedeho has quit [Ping timeout: 256 seconds]
<ipfsbot>
[go-ipfs] whyrusleeping created mfs-api-redux (+1 new commit): http://git.io/vcZQc
<jbenet>
davidar: though maybe each app should have its own repo
<jbenet>
(and even not necessarily on ipfs/, **shrug**)
<jbenet>
i'm a fan of smaller repos. go-ipfs is too huge
akhavr has joined #ipfs
<davidar>
jbenet (IRC): sure, I meant as an equivalent to ipfs/archives for coordinating people during the ideas phase, before they're mature enough to be split into their own repo
akhavr has quit [Read error: Connection reset by peer]
akhavr has joined #ipfs
<davidar>
Or it can just stay in ipfs/archives, I don't mind
akhavr has quit [Read error: Connection reset by peer]
<hpk>
am also talking with jbenet who is about to submit something wrt ipfs
<hpk>
i am thinking of submitting an updated version of my ep2015 keynote which features ipfs but from a historical perspective
<jbenet>
I've submitted 4 talks
<hpk>
historical techno-political
<jbenet>
i should stop
<hpk>
jbenet: :)
<daviddias>
oh my, it ends tonight
<daviddias>
jbenet: ahaha awesome! :)
<hpk>
daviddias: therefore my ping here :)
<hpk>
jbenet: all lectures?
<jbenet>
daviddias: maybe talk about libp2p and building things with it? or distributed computation
<jbenet>
hpk: yeah....
<daviddias>
jbenet: did you submit for libp2p?
<daviddias>
yes :D
<jbenet>
You will really like this one: Avoiding Another Alexandria
<jbenet>
"(Scientific) Knowledge is humanity's most valuable product. And yet, it is stored in fragile, temporary media. The world stands on the brink of another dark age, or another mass extinction event. We MUST ensure our knowledge endures!
<jbenet>
This talk will discuss why it is important to backup our knowledge, constraints on the problem, and plans to avert catastrophe."
<hpk>
jbenet: maybe a good idea to shift one talk to daviddias?
<hpk>
you are unlikely to get 4 admitted
<jbenet>
daviddias: no i didn't, you go for it
<jbenet>
hpk hahaha yeah
<daviddias>
I will then :)
<daviddias>
would ipsurge be interesting to this audience?
<hpk>
i also plan to give one talk about "hacking EU funding", talking about how we got NEXTLEAP into Horizon2020, the 80 billion Euro research framework
<daviddias>
hpk: oh, I want to know more about that!
akhavr has quit [Ping timeout: 240 seconds]
<hpk>
daviddias: wondering whether to go for a workshop rather than a lecture for that
<daviddias>
ipsurge is a simple tool (the magic is in IPFS, as always :P) to deploy Native Web Applications (or static web sites) to IPFS
<hpk>
daviddias: so something that, if combined with gittorrent/gitswarm (presented at DTN), could provide a decentralized github?
<daviddias>
hpk: it would cover the 'static webpages' feature of github, yes :)
Guest73396 has quit [Ping timeout: 265 seconds]
<hpk>
why only static?
<daviddias>
Static as in 'we deliver your application assets to your users, then, if you want, you can use IPFS to store your application data, which is cool, or you can have your server host an API for your service, which is cool too' :)
<hpk>
sidenote: i guess one necessary development for ipfs is to allow multiple parties to update shared state without sharing a private key (or maybe that is not so bad? not sure)
<hpk>
background is what i was discussing with chris ball (author of gittorrent): how to get the full github experience, i.e. web interface to PRs, reviews, on top of ipfs
<daviddias>
hpk: I guess we can have IPRS validator mechanisms that check the validity of the record based on its signature; the signing key has to be signed by another key, and that other key is what gives all the others validity to put records on the DHT
<hpk>
daviddias: so when i make you a member of my project, your key is signed which allows you to publish updated state to ipns
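A sketch of that validity rule — a record is accepted only if its signing key has itself been signed by the project key; the types and helpers are invented for illustration and are not the IPRS API:

```go
package records

import "crypto/ed25519"

// Record is a simplified signed record: a member key publishes Value,
// and carries a certificate (the member key signed by the project key).
type Record struct {
	Value     []byte
	Sig       []byte // signature of Value by MemberKey
	MemberKey ed25519.PublicKey
	Cert      []byte // signature of MemberKey by the project key
}

// Valid accepts the record only if the member key was delegated by the
// project key and the record itself is signed by that member key.
func Valid(projectKey ed25519.PublicKey, r Record) bool {
	if !ed25519.Verify(projectKey, r.MemberKey, r.Cert) {
		return false // key was never made a member of the project
	}
	return ed25519.Verify(r.MemberKey, r.Value, r.Sig)
}
```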
<daviddias>
it should be easy to add that criteria
<daviddias>
or even, if someone was more control
<daviddias>
the whole validity can be ensured by some central service (that holds your citizen card public key), so that you can verify if that record is valid by asking that service if the sig is valid for that record
<daviddias>
s/was/wants
<multivac>
daviddias meant to say: or even, if someone wants more control
<daviddias>
multivac: you are awesome
Guest73396 has joined #ipfs
* hpk
hasn't heard of iprs yet, actually.
<daviddias>
IPRS is how we store 'records', e.g. provider records, where peers put in the DHT which blocks they have, so that other peers know whom to contact for a given block
<_p4bl0>
jbenet: I really hope your talk about storing scientific knowledge is accepted :)
<jbenet>
_p4bl0 thanks!
<_p4bl0>
it is an important subject, and just yesterday I was wondering if arXiv and HAL and other open eprint archives should use IPFS in addition to their many backups
<aar->
to answer my own q from the other day: sysdig is the answer.
<_p4bl0>
it would totally make sense
<jbenet>
_p4bl0 yes it would. we're on it github.com/ipfs/archives
<_p4bl0>
jbenet: awesome
<_p4bl0>
oh btw, I got some archives of YIPL & TAP, I guess it would make sense to publish them on IPFS
<_p4bl0>
actually all of textfiles.com should be in IPFS
pfraze has joined #ipfs
Guest73396 has quit [Ping timeout: 240 seconds]
Guest73396 has joined #ipfs
bashrc_ has quit [Ping timeout: 260 seconds]
Guest73396 has quit [Ping timeout: 250 seconds]
pfraze has quit [Remote host closed the connection]
pfraze has joined #ipfs
bashrc has quit [Ping timeout: 256 seconds]
pfraze has quit [Ping timeout: 272 seconds]
bashrc has joined #ipfs
bashrc has quit [Client Quit]
vijayee_ has joined #ipfs
od1n has quit [Ping timeout: 244 seconds]
sseagull has joined #ipfs
Tv` has joined #ipfs
<dignifiedquire>
hpk: will you be around at tonights devmeetup ?
od1n has joined #ipfs
<hpk>
dignifiedquire: uh, good you mention it again. I aim to, yes.
<hpk>
dignifiedquire: long time not seen!
<hpk>
dignifiedquire: illness mastered?
<dignifiedquire>
hpk: yes, slowly getting back
<dignifiedquire>
hpk: I didn’t get out of bed till last Sunday
voxelot has joined #ipfs
pfraze has joined #ipfs
Not_ has joined #ipfs
<ion>
Random thoughts: for the time period before every browser has native IPFS support, web
<ion>
Uh, a mis-send on a mobile keyboard
<ion>
A gateway for hosting your own IPFS content (say, everything you have pinned) over HTTP which transparently rewrites all links to other objects with https://ipfs.io/ipfs/… (when you can't provide the bandwidth for a general gateway)
mildred has quit [Ping timeout: 252 seconds]
<ion>
Actually, better to just do HTTP redirects instead of modifying content.
<ion>
So you can update your HTTP website by updating an IPNS name. You could run nginx or equivalent and proxy / to http://localhost:80xx/ipns/blah/ which would take care of redirects for third party objects.
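A toy version of that redirect setup in Go rather than nginx (the gateway port, IPNS name, and listen port are placeholders): serve your own site from the local gateway under your IPNS name, and 302 any other /ipfs/ or /ipns/ path to the public gateway.

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

func main() {
	// Local IPFS gateway and the IPNS name of "your" site; both are
	// placeholders for whatever you actually run and publish.
	gw, _ := url.Parse("http://localhost:8080")
	const site = "/ipns/example-ipns-name/"

	proxy := httputil.NewSingleHostReverseProxy(gw)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		switch {
		case strings.HasPrefix(r.URL.Path, "/ipfs/"), strings.HasPrefix(r.URL.Path, "/ipns/"):
			// Third-party objects: redirect to the public gateway
			// instead of serving (or rewriting) them ourselves.
			http.Redirect(w, r, "https://ipfs.io"+r.URL.Path, http.StatusFound)
		default:
			// Everything else is our own site, served via the local
			// gateway under our IPNS name.
			r.URL.Path = site + strings.TrimPrefix(r.URL.Path, "/")
			proxy.ServeHTTP(w, r)
		}
	})
	http.ListenAndServe(":8081", nil)
}
```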
Encrypt has joined #ipfs
<_p4bl0>
what do you call a website on IPFS?
<whyrusleeping>
dagsite!
<noffle>
ipfsite
<whyrusleeping>
(thats not an official term, just what i say)
bashrc has joined #ipfs
<noffle>
(p.s. good morning)
<_p4bl0>
Tor hidden services are onions, I2P's are EepSite, what about IPFS?
lidel has joined #ipfs
<_p4bl0>
IPFSite might work
<_p4bl0>
whyrusleeping: I fear that dagsite is not sexy enough for people who do not know what a DAG is (or how it is relevant here)
<voxelot>
do we call them httpsites now? just going to call my site, mysite.com
<_p4bl0>
voxelot: on http they are websites :)
dignifiedquire has quit [Read error: Connection reset by peer]
<ion>
voxelot: Yeah, my site is called Qmasdfsaddadsaddaadaadfadfssfaasdaasdf
<_p4bl0>
:D
dignifiedquire has joined #ipfs
dignifiedquire has quit [Client Quit]
<ion>
“I” for IPFS. I-site. See what I did there?
* noffle
dies
<cryptix>
ion: dont tell apple :p
<achin>
too late, ion has already been sued
<_p4bl0>
*** ion's new nickname is iOff
<voxelot>
right, get your Qm site today!
<cryptix>
whyrusleeping: do you still need a CR of 1769? I can go through it later actually
ygrek has joined #ipfs
<_p4bl0>
btw, why is there "Qm" at the beginning of every name in IPFS?
<achin>
quick thought before i run away to eat lunch: given a hash, it would be neat if there was a tool that would let me pin a user-specified percentage of the nodes referenced by it
<achin>
IPFS hashes are multihashes. they encode the type of hash function used
<cryptix>
_p4bl0: it's the multicodec shining through, indicating sha256
<achin>
all IPFS hashes at the moment use the same hash function
<whyrusleeping>
cryptix: sure, it might be worth taking a look at
<_p4bl0>
achin: cryptix: oh, thanks :)
<cryptix>
_p4bl0: that it is 'Qm' is a coincidence coming from base58 encoding iirc
<achin>
(note that multicodec is something different, which as i understand it, is not implemented in ipfs)
<cryptix>
oh right.. multihash**
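To make the 'Qm' point concrete: a sha2-256 multihash is the function code 0x12, then the digest length 0x20, then the 32-byte digest, and base58-encoding any value with that prefix happens to start with 'Qm'. A self-contained illustration (hand-rolled base58 rather than the real multihash library):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/big"
)

const b58Alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

// base58Encode is a minimal big.Int-based base58 encoder (no leading-zero
// handling, which is fine here because the multihash prefix byte is 0x12).
func base58Encode(b []byte) string {
	x := new(big.Int).SetBytes(b)
	base := big.NewInt(58)
	mod := new(big.Int)
	var out []byte
	for x.Sign() > 0 {
		x.DivMod(x, base, mod)
		out = append([]byte{b58Alphabet[mod.Int64()]}, out...)
	}
	return string(out)
}

func main() {
	digest := sha256.Sum256([]byte("hello ipfs"))
	// multihash = <fn code><digest length><digest>; 0x12 = sha2-256, 0x20 = 32 bytes
	mh := append([]byte{0x12, 0x20}, digest[:]...)
	fmt.Println(base58Encode(mh)) // starts with "Qm"
}
```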
* cryptix
looks for this 5th element-gif
<_p4bl0>
haha
<cryptix>
ipfs.pics needs a tagging index :P
<ion>
Call it Guten Tag
<ion>
That would be a great name for a Project Gutenberg tag index.
<cryptix>
from gutenberg etc? like it a lot :P
<cryptix>
(btw it literally means 'good day' in German)
<ion>
Yes, that's the joke. :-P
<cryptix>
hahaha <3
<_p4bl0>
I think I like IPFSite (let's say it stands for InterPlanetary Fabulous Site)
ygrek_ has joined #ipfs
devbug has joined #ipfs
atrapado has joined #ipfs
<bashrc>
are the bootstrap hashes peer IDs, or something else?
<bashrc>
ah, it looks like they are
<blame>
What bucket size are you using for kademlia?
ygrek has quit [Ping timeout: 256 seconds]
<whyrusleeping>
k=20
<blame>
Is there a specific rationale, or is it just 'big enough'? I just implemented kademlia for UrDHT and I am sanity checking
<whyrusleeping>
Blame: I honestly havent decided on what the 'best' value would be. I want to build a giant testnet and tweak things like that
<blame>
+1
<whyrusleeping>
20 is a decent value i think for most situations though, most nodes will end up in the range of ~100 peers with a network of 100k
<whyrusleeping>
(i think thats right, my math could be off though)
<blame>
Honestly, one of the problems with kademlia is that bigger k is better with no bound. I think it is O(log(log(n))), not O(1)
<blame>
But you can easily pick a big enough value for bounded network sizes
<whyrusleeping>
yeah, k-selection is really just a resource limit
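In that sense k is just the per-bucket cap in the routing table; a minimal sketch (simplified, not go-ipfs's actual table — classic Kademlia would ping the least-recently-seen peer before evicting it):

```go
package kademlia

// Bucket holds at most k peers for one distance range of the routing
// table; the cap is what makes k effectively a resource limit.
type Bucket struct {
	k     int
	peers []string // peer IDs, least recently seen first
}

// Update records that we heard from peer, keeping the bucket within k.
func (b *Bucket) Update(peer string) {
	for i, p := range b.peers {
		if p == peer {
			// Already known: move it to the most-recently-seen end.
			b.peers = append(append(b.peers[:i], b.peers[i+1:]...), peer)
			return
		}
	}
	if len(b.peers) < b.k {
		b.peers = append(b.peers, peer)
		return
	}
	// Bucket full: this sketch simply evicts the least-recently-seen
	// peer; classic Kademlia would ping it first and drop the newcomer
	// if it still responds.
	b.peers = append(b.peers[1:], peer)
}
```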
bedeho has joined #ipfs
<ion>
UrDHT, huh? Would it make sense for it to be a part of libp2p?
<whyrusleeping>
ion: UrDHT is (afaik) general enough that we could either interop, or easily swap it in for our DHT
<blame>
UrDHT is (right now) a research project for my PhD
<blame>
it has a few exciting capabilities: abstracted (you can easily modify the code to match any existing DHT) + supports subnetworks (you can run uniquely identified networks running different logic and store bootstrap peers in the top-level DHT)
<blame>
So I just made a subnetwork with different logic to match Kademlia
<blame>
we have chord too
<blame>
And we are working on a paper right now, where we are going to show their churn rate/record loss and size/lookup latency curves
<blame>
using the same networking protocol backend to make the comparison reasonable
<blame>
future work I want to do is use the "generalness" of urdht to try out new things (DHTs that self load balance or embed latency)
<whyrusleeping>
Blame: how are you going to do churn rate and latency/lookup testing?
<blame>
It needs more testing + right now the implementation is in python (which makes plugins a lot easier)
<blame>
I should write an IPFS wrapper so you can use it as a subnetwork.
<whyrusleeping>
:D
charley has joined #ipfs
jhulten has joined #ipfs
<achin>
one of these days, i want to try to implement kademlia (or something like it)
<achin>
it seems like it would be a good learning experience
<whyrusleeping>
achin: its pretty fun. Implementing kademlia itself is fairly easy to do if you have experience with network programming already
<whyrusleeping>
just gotta write a k-bucket implementation and code to do 'findpeer'
<whyrusleeping>
the rest just uses findpeer pretty much
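And a correspondingly simplified findpeer: an iterative lookup that keeps querying the nearest unqueried peer until no new peers turn up (the network call is abstracted behind an interface; this is a sketch, not the go-ipfs implementation):

```go
package kademlia

import (
	"math/big"
	"sort"
)

// Node is the one network primitive an iterative lookup needs: ask a
// remote peer for the peers it knows that are closest to the target.
type Node interface {
	ClosestPeers(peer, target string) []string
}

// xorDistance is the Kademlia metric: XOR the IDs and compare as integers.
func xorDistance(a, b string) *big.Int {
	x := new(big.Int).SetBytes([]byte(a))
	y := new(big.Int).SetBytes([]byte(b))
	return new(big.Int).Xor(x, y)
}

// FindPeer starts from our own closest known peers and keeps querying
// the nearest unqueried peer, then returns the k closest peers seen.
func FindPeer(n Node, start []string, target string, k int) []string {
	sortByDist := func(ids []string) {
		sort.Slice(ids, func(i, j int) bool {
			return xorDistance(ids[i], target).Cmp(xorDistance(ids[j], target)) < 0
		})
	}
	queried := map[string]bool{}
	seen := map[string]bool{}
	known := append([]string(nil), start...)
	for _, p := range known {
		seen[p] = true
	}
	for {
		sortByDist(known)
		next := ""
		for _, p := range known {
			if !queried[p] {
				next = p
				break
			}
		}
		if next == "" { // nothing left to ask
			if len(known) > k {
				known = known[:k]
			}
			return known
		}
		queried[next] = true
		for _, p := range n.ClosestPeers(next, target) {
			if !seen[p] {
				seen[p] = true
				known = append(known, p)
			}
		}
	}
}
```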
<achin>
doing so has been on my todo list for a few years. maybe ipfs has bumped it up a few levels
<whyrusleeping>
achin: well, if you help us implement ipfs in $LANGUAGE, then you could write the kademlia impl for it ;)
<achin>
i'm chipping away at writing utilities in rust that can read/write the on-disk merklenodes. but i'm a long way from writing the network parts
sonatagreen has joined #ipfs
<achin>
i might take a closer look once the UDT stuff is merged, but sadly UDT is a C++ library and there is no good way to bind to C++ from rust
<ion>
Write a C wrapper first. :-P
<achin>
its either write a C wrapper, or reimplement. writing a C wrapper seems much easier
<whyrusleeping>
there is a C wrapper on the UDT code
<achin>
ion: i've been doing some thinking about an IPFS-powered version control system. have some minutes for some thoughts?
<ion>
achin: Sure. The IPFS people are also thinking about that, there's an issue for it.
<achin>
do you know the issue off-hand?
<achin>
the first thing i was wondering: should the commit trees be stored as an ipfs unixfs node? if so, this would mean you could use a vanilla "ipfs get" (or a gateway) to browse the full contents of the tree.
<achin>
the only blocking issue that i can see at the moment is that the unixfs nodes don't track permissions. but i think the ipfs folks want to fix this?
<whyrusleeping>
yeah... we do
<whyrusleeping>
not sure the best way though
<whyrusleeping>
permissions make content addressing get weird
<achin>
if two files have different perms but same content, do they get different hashes?
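That question is exactly where it gets weird: if the mode is serialized into the addressed object, identical contents stop deduplicating, while keeping the mode in a separate metadata node that links to the data avoids that. A toy illustration using plain sha256 in place of real merkledag hashing:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	data := []byte("same file contents")

	// Option A: permissions serialized into the addressed object —
	// identical contents with different modes hash differently, so
	// they no longer deduplicate.
	withMode := func(mode uint32) [32]byte {
		return sha256.Sum256(append([]byte{byte(mode >> 8), byte(mode)}, data...))
	}
	fmt.Printf("0644: %x\n", withMode(0644))
	fmt.Printf("0755: %x\n", withMode(0755))

	// Option B: address the raw contents and keep the mode in a small
	// wrapper/metadata node that links to it — the data block stays
	// shared between both files.
	fmt.Printf("data: %x\n", sha256.Sum256(data))
}
```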
<achin>
Blame: i currently have in mind a dedicated application that uses git porcelain, and the index plumbing stuff, but a new backend for storing objects in ipfs
<achin>
but i've only spent a few hours thinking about this
<ion>
achin: With IPFS we (collectively) have a chance to write a better CLI than what git has.
ianopolous has quit [Ping timeout: 264 seconds]
<achin>
sure, but i also want to re-use all of the R&D that has gone into git over the years
<ion>
Sure, we should take all the good things about Git. I'm not sure the porcelain is part of it though. :-P The Merkle DAG is, but IPFS has that from the start.
<achin>
(it may be that the porcelain layer is too heavily tied to the git object model to use)
<achin>
i admit more research is needed
<ion>
There are some projects on the web where people have tried to make an alternative Git porcelain. Looking at their ideas might be beneficial.
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed ipns/patches from 41db338 to 3c7c165: http://git.io/vn0bZ
<ipfsbot>
go-ipfs/ipns/patches e202c65 Jeromy: vendor in iptb and kingpin...
<ipfsbot>
go-ipfs/ipns/patches e5512b4 Jeromy: make publish more configurable and add test for repub...
<ipfsbot>
go-ipfs/ipns/patches 3c7c165 Jeromy: Addressing comments from CR...
Encrypt has quit [Quit: Quitte]
<bashrc>
can I use avahi addresses in place of IP addresses for bootstrap nodes? eg. blah.local instead of 192.168.x.y
rendar has quit [Ping timeout: 255 seconds]
pfraze has quit [Remote host closed the connection]
rendar has joined #ipfs
<whyrusleeping>
bashrc: uhm... the multiaddrs only accept ip addresses
devbug has quit [Ping timeout: 240 seconds]
devbug has joined #ipfs
<jbenet>
bashrc: for now -- we should be able to deal with avahi (and dns) soon enough.
pfraze has joined #ipfs
<bashrc>
ok
<bashrc>
I'm currently testing on a mesh network, and because IPs may change it would be more convenient to use the avahi names
bedeho has quit [Ping timeout: 252 seconds]
<whyrusleeping>
bashrc: does your mesh network support mdns?
<ion>
A multiaddr for mDNS might be nice.
bashrc_ has joined #ipfs
<whyrusleeping>
ion: that doesnt exactly make sense
<whyrusleeping>
it would really just be your peerID
voxelot has quit [Ping timeout: 264 seconds]
<ion>
/mdns/foo.local/udp/42
<ion>
bashrc: If the mesh hosts see each other by mdns, would the normal IPFS local discovery work as well?
<whyrusleeping>
mappum: jbenet, a proposal: in the cmds lib, `res.SetError(...)` should take an 'interface{}' as the first argument instead of an error
<whyrusleeping>
and call errors.New() on it if its a string
<mappum>
and default to ErrNormal?
<mappum>
that could make sense
bashrc_ has quit [Ping timeout: 246 seconds]
bedeho has joined #ipfs
<whyrusleeping>
yeah, and we could even leave the error type argument in
<whyrusleeping>
my biggest gripe is having to wrap all my error strings in 'errors.New()' just to pass them into the function
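A hedged sketch of the proposed signature (hypothetical, not the actual commands-lib code): SetError takes an interface{} plus the error type, and wraps strings itself.

```go
package cmds

import (
	"errors"
	"fmt"
)

type ErrorType int

const ErrNormal ErrorType = 0

type Response struct {
	err  error
	code ErrorType
}

// SetError, as proposed: accept an error, a string, or anything else
// printable, and wrap non-errors with errors.New / fmt.Errorf so callers
// no longer have to write errors.New("...") at every call site.
func (r *Response) SetError(v interface{}, code ErrorType) {
	switch e := v.(type) {
	case error:
		r.err = e
	case string:
		r.err = errors.New(e)
	default:
		r.err = fmt.Errorf("%v", e)
	}
	r.code = code
}
```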
<bashrc>
ion: I haven't tested enough to know yet
<bashrc>
mdns may already be enabled for ZeroNet
<blame>
Do you have a system in place for doing automated testing using multiple nodes? ie scenario testing?
<_p4bl0>
meh, I guess people want to program in Java in every language
<ansuz>
> get off my lawn
<giodamelio>
I'm not a huge fan of the classes myself, but there are a few features that make a big difference.
<ansuz>
I always liked that javascript was really simple
<giodamelio>
Basically things that make js more on par with other interpreted languages
<ansuz>
functions provide scope, and can be made to do just about anything
* ansuz
old fashioned
bashrc has quit [Ping timeout: 264 seconds]
bashrc has joined #ipfs
<giodamelio>
I just like things that make it feel more like python (my first love). Also having a real native module system is pretty nice, and a big step forward for the ecosystem
<_p4bl0>
ah, the module system is a good idea
<giodamelio>
Ya, it's pretty crazy how long JS has been around without one. I'm just glad it is standardized so we can get rid of CommonJS, AMD, and the like. No more ugly UMD bits wrapping every random JS library.
reit has quit [Ping timeout: 260 seconds]
reit has joined #ipfs
Not_ has quit [Remote host closed the connection]
Not_ has joined #ipfs
<_p4bl0>
giodamelio: :)
Not_ has quit [Remote host closed the connection]
<giodamelio>
Things like this are why I still love javascript. I know it can be a batshit crazy language (see https://www.youtube.com/watch?v=FqhZZNUyVFM), but the language is in a period of growth and will only keep getting better as time goes on.
Not_ has joined #ipfs
epitron has joined #ipfs
bedeho has joined #ipfs
rendar has quit []
bashrc has quit [Quit: Lost terminal]
<whyrusleeping>
i really wish sharness didnt continue on after a test fails
<whyrusleeping>
and i REALLY wish that it didnt take like fifty ctrl+c's to stop it
devbug has quit [Ping timeout: 264 seconds]
<spikebike>
giodamelio: heh, quite the optimist. I wish all languages got better over time.
ruby32 has joined #ipfs
Not_ has quit [Read error: Connection reset by peer]
akhavr has joined #ipfs
Not_ has joined #ipfs
pfraze_ has joined #ipfs
pfraze has quit [Ping timeout: 272 seconds]
akhavr has quit [Ping timeout: 260 seconds]
sonatagreen has joined #ipfs
akhavr has joined #ipfs
pfraze_ has quit [Remote host closed the connection]
pfraze has joined #ipfs
danielrf has quit [Remote host closed the connection]