<pinbot>
now pinning /ipfs/Qmboof88kq6WUxmVRXBZaYjX2jTZ48V52PbQ8VoHChLWwQ
jedahan has joined #ipfs
<pinbot>
[host 4] failed to grab refs for /ipfs/Qmboof88kq6WUxmVRXBZaYjX2jTZ48V52PbQ8VoHChLWwQ: Post http://[fc3d:9a4e:3c96:2fd2:1afa:18fe:8dd2:b602]:5001/api/v0/refs?arg=/ipfs/Qmboof88kq6WUxmVRXBZaYjX2jTZ48V52PbQ8VoHChLWwQ&encoding=json&stream-channels=true&r=true&: dial tcp [fc3d:9a4e:3c96:2fd2:1afa:18fe:8dd2:b602]:5001: getsockopt: connection timed out
<pinbot>
now pinning /ipfs/QmW5dVT6zajGMHMXrUAYPwj654VfvYvD4T5o7cHuAzN6PU
jedahan has joined #ipfs
<pinbot>
[host 4] failed to grab refs for /ipfs/QmW5dVT6zajGMHMXrUAYPwj654VfvYvD4T5o7cHuAzN6PU: Post http://[fc3d:9a4e:3c96:2fd2:1afa:18fe:8dd2:b602]:5001/api/v0/refs?arg=/ipfs/QmW5dVT6zajGMHMXrUAYPwj654VfvYvD4T5o7cHuAzN6PU&encoding=json&stream-channels=true&r=true&: dial tcp [fc3d:9a4e:3c96:2fd2:1afa:18fe:8dd2:b602]:5001: getsockopt: connection timed out
<DarkFox>
Quick question confirming a limitation of IPFS. Does IPFS *ONLY* work with static, immutable data? (Plus tiny mutable references in IPNS) Notably no intention to add capabilities and interfaces for non-clonable objects?
<Kubuxu>
Not really.
<Kubuxu>
I mean you can do dynamic data
<Kubuxu>
look for example at Orbit and OrbitDB
cwahlers has quit [Ping timeout: 248 seconds]
<DarkFox>
(I have my own project aiming to be a generic framework; from its perspective, IPFS looks like an application)
jedahan has quit [Ping timeout: 272 seconds]
<DarkFox>
Kubuxu: And that database is implemented as immutable CoW + mutable references to children?
<Kubuxu>
yes and no, it isn't CoW, it is append-only, as the previous data is stored in a separate block
<DarkFox>
So the previous value is not garbage collected?
<DarkFox>
Append is also still CoW, though sounds like it is in a chain so no GC?
<Kubuxu>
it is not, but you could create one that wouldn't ref all children
<DarkFox>
Right
<Kubuxu>
you could use the same mechanism, pub/sub, and send a new reference not linking to the old data
<Kubuxu>
then the old data would be GCed
<DarkFox>
Yea
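A minimal sketch of the head-swap Kubuxu describes, assuming a local go-ipfs daemon; the hashes and the pubsub topic name are placeholders, and pubsub still needs to be enabled as an experiment:

    ipfs pin add QmNewRoot     # keep the new head reachable
    ipfs pin rm QmOldRoot      # drop the pin on the superseded head
    ipfs repo gc               # blocks no longer referenced by any pin get collected
    # announce the new head to peers; topic name is illustrative,
    # and the daemon must be running with --enable-pubsub-experiment
    ipfs pubsub pub my-db-heads QmNewRoot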
ylp has quit [Ping timeout: 265 seconds]
cwahlers has joined #ipfs
<DarkFox>
Kubuxu: What is the state of restricting the sharing of objects? Such that objects may be owned by a group and non-members cannot request them by the hash/key.
computerfreak has quit [Remote host closed the connection]
<Kubuxu>
There are plans not and start of a progress.
<DarkFox>
Kubuxu: Please re-word that?
<DarkFox>
"There are plans not" ?
<DarkFox>
There are no plans? What progress has been started? Progress on writing out the plans?
<Kubuxu>
There are plans, not much started implementation-wise yet.
<DarkFox>
Okay. Thank you. :-)
<Kubuxu>
We will be adding encrypted objects to ipfs but after ipld is a thing.
<DarkFox>
ipld?
corvinux has quit [Ping timeout: 240 seconds]
<Kubuxu>
Something like jsonld but on IPFS, and much better.
<DarkFox>
Kubuxu: Something I'm focusing on for my project is capability-security; so encrypted objects aren't needed - as you still need to trust the endpoints that are ever given the ability to decrypt the object; they always have the capability to share it further.
<DarkFox>
Kubuxu: Mind trying another attempt to describe ipld? (Or as to what jsonld is?) :D
<Kubuxu>
either way: capability or encryption, whoever you give access to can copy the data and republish it
<DarkFox>
Kubuxu: Indeed; though if something is a right, then it cannot be cloned; only forwarded and optionally provisioned. And capabilities are things you have, not necessarily things you know (like keys).
<DarkFox>
So capabilities here are far better fitting IMHO
<DarkFox>
Assuming that end-to-end encryption is already present.
PseudoNoob has quit [Remote host closed the connection]
<DarkFox>
Kubuxu: So meta including ACLs?
<Kubuxu>
in IPLD: the unixfs ipld format is not yet specced out, so I don't know
<DarkFox>
What I'm looking at looks like a pretty POSIX-like ACL
<DarkFox>
"mode: 0755" "owner: jbenet"
<DarkFox>
Just missing group
Encrypt has quit [Quit: Quit]
Nergal has quit [Ping timeout: 255 seconds]
auscompgeek has quit [Ping timeout: 255 seconds]
<Kubuxu>
We have talked about something like this,
<Kubuxu>
I don't remember what the conclusion was.
<keks>
lgierth: used --with-deps today and must say it works pretty guud
<keks>
good
DrWhax_ has joined #ipfs
auscompgeek has joined #ipfs
JesseW has quit [Ping timeout: 265 seconds]
palkeo has quit [Quit: Konversation terminated!]
palkeo has joined #ipfs
palkeo has quit [Ping timeout: 240 seconds]
G-Ray has joined #ipfs
DrWhax_ is now known as DrWhax
<pinkieval>
I would like to create a gateway that only serves “trusted” objects (I don't want my server to serve just anything); how could I do this without keeping a whitelist on the server?
tmg has left #ipfs [#ipfs]
pfrazee has joined #ipfs
ilmu has quit [Ping timeout: 265 seconds]
<lgierth>
pinkieval: by keeping a list of allowed objects :) how else are you gonna determine what's trusted?
<lgierth>
in the future you'll be able to serve only objects that have been signed by X
<lgierth>
keks: what did --with-deps exactly do? i didn't try it out and it was hard to figure out from the code
<pinkieval>
“how else are you gonna determine what's trusted?” -> signatures ^^
Akaibu has joined #ipfs
pfrazee has quit [Ping timeout: 260 seconds]
<pinkieval>
also, a possible solution I see is to look at the objects in a node's ipns space
<pinkieval>
but it's more of a hack than a solution
<lgierth>
yeah you can have nginx in front filter everything but /ipns/Qmfoobar
<pinkieval>
or have a custom proxy looking at the content of, say, /ipns/Qmfoobar/whitelist
cemerick has joined #ipfs
<lgierth>
yeah that'd work too -- nginx has lua and js modules
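A rough sketch of the whitelist idea above, assuming a cron job refreshes a plain allow-list file that the front proxy (nginx with its lua or js module, or anything else) then consults; the /ipns/Qmfoobar/whitelist path, the file locations, and the one-hash-per-line layout are all placeholders:

    #!/usr/bin/env bash
    set -euo pipefail
    # pull the published whitelist and swap it into place atomically
    ipfs cat /ipns/Qmfoobar/whitelist > /etc/gateway/allowed.txt.tmp
    mv /etc/gateway/allowed.txt.tmp /etc/gateway/allowed.txt
    # the per-request check on the proxy side could then be as simple as:
    #   grep -qxF "$requested_hash" /etc/gateway/allowed.txt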
Johnny_ has quit [Ping timeout: 248 seconds]
ylp has quit [Ping timeout: 244 seconds]
zorglub27 has joined #ipfs
<keks>
it updates a dep in the current repo and all the dependencies
<keks>
e.g. it updates go-multiaddr in the current repo and all repos that are in the current repo's dep-tree
<keks>
lgierth:
<keks>
also i found the code to be pretty straight forward
maxlath has joined #ipfs
<keks>
it iterates over the deps, and either replaces it iff it is the one to be replaced or recurses if it is a different repo
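Not the actual --with-deps implementation, only a rough bash rendering of the walk keks describes; the gxDependencies field access and the GOPATH-based repo paths are assumptions, and the actual replace step is elided:

    # walk the dep tree: act on the target dep where it appears, recurse otherwise
    update_deps() {
      local repo=$1 target=$2 newhash=$3
      local dep
      for dep in $(jq -r '.gxDependencies[].name' "$repo/package.json"); do
        if [ "$dep" = "$target" ]; then
          echo "would update $target in $repo to $newhash"   # the real tool does the replace here
        else
          update_deps "$GOPATH/src/github.com/ipfs/$dep" "$target" "$newhash"
        fi
      done
    }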
Qwazerty2 is now known as Qwazerty
zorglub27 has quit [Ping timeout: 240 seconds]
maxlath is now known as zorglub27
<lgierth>
okay but then you still need to go and update the respective deps' code in git eh?
<keks>
hmm not sure where it keeps the changes
TheWhisper_ is now known as TheWhisper
ylp has joined #ipfs
<keks>
lgierth: i agree it would be nice if it automatically filed PRs :)
<Kubuxu>
if we could do this but say: update across those locations, and then use myrepos + hub, it could be automated
ygrek has joined #ipfs
<lgierth>
oh yeah, hub
<Kubuxu>
the world wants to stop me from fixing go-reuseport
<Kubuxu>
first I missed my bus
<lgierth>
Kubuxu: go do the reuseport thing if you want, i can take the deps
<Kubuxu>
now that I got onto a bus the AC power is broken
ulrichard has joined #ipfs
deltab has quit [Read error: Connection reset by peer]
anonymuse has quit [Read error: Connection reset by peer]
nonaTure has joined #ipfs
zorglub27 has quit [Ping timeout: 276 seconds]
clownpriest has joined #ipfs
anonymuse has joined #ipfs
deltab has joined #ipfs
anonymuse has quit [Remote host closed the connection]
ygrek has quit [Ping timeout: 265 seconds]
ckwaldon has quit [Ping timeout: 250 seconds]
kragniz has quit [Quit: WeeChat 1.5]
kragniz has joined #ipfs
kulelu88 has joined #ipfs
dignifiedquire has joined #ipfs
ckwaldon has joined #ipfs
ygrek has joined #ipfs
cemerick has quit [Ping timeout: 240 seconds]
Johnny_ has joined #ipfs
ashark has joined #ipfs
ljhms has quit [Ping timeout: 265 seconds]
nonaTure has quit [Ping timeout: 272 seconds]
<_mak>
is it possible to pin a list of hashes by pointing to a file with the hashes?
ljhms has joined #ipfs
<keks>
xargs ipfs pin add < file
<_mak>
do you know of an equivalent for windows?
<keks>
erm, no sorry
<_mak>
cool, thanks anyway
ecloud has quit [Ping timeout: 240 seconds]
<keks>
well you could use search and replace on the file to replace newlines with spaces, then start the line with "ipfs pin add ", and call it as a batch file
<keks>
also if you have git installed you have a bash that you can run the previous snippet in
<keks>
_mak: ^^^
<_mak>
keks: yeah, I wanted to use it to transport the hashes.
<_mak>
I'll probably go with a batch file and with the commands as you say
<keks>
i hope this works, I think there are limitations on the maximum line length
<keks>
at least there is in linux
shizy has joined #ipfs
<keks>
or you just put "ipfs pin add " in front of every single line. that should work but might be a bit slower? don't know
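A couple of shell-side variants of the same pin-from-file idea, for reference; hashes.txt is a placeholder file with one hash per line:

    xargs -n 1 ipfs pin add < hashes.txt     # one hash per invocation, sidesteps arg-length limits
    # or, equivalently:
    while read -r h; do ipfs pin add "$h"; done < hashes.txt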
ecloud has joined #ipfs
apiarian has quit [Ping timeout: 244 seconds]
apiarian has joined #ipfs
eclaire has joined #ipfs
ygrek has quit [Remote host closed the connection]
ygrek has joined #ipfs
ulrichard has quit [Ping timeout: 265 seconds]
ygrek has quit [Read error: Connection reset by peer]
ygrek has joined #ipfs
ygrek has quit [Read error: Connection reset by peer]
Guest20437 has quit []
ygrek_ has joined #ipfs
ygrek_ has quit [Remote host closed the connection]
mildred has quit [Quit: Leaving.]
jedahan has joined #ipfs
palkeo has joined #ipfs
palkeo has joined #ipfs
palkeo has quit [Changing host]
ylp has quit [Quit: Leaving.]
nonaTure has joined #ipfs
matoro has quit [Ping timeout: 260 seconds]
apiarian has quit [Quit: zoom]
PseudoNoob has joined #ipfs
dmr has quit [Quit: Leaving]
WhiteWhale has quit [Ping timeout: 265 seconds]
sametsisartenep has joined #ipfs
nonaTure has quit [Ping timeout: 240 seconds]
pfrazee has joined #ipfs
pfrazee has quit [Ping timeout: 265 seconds]
achin has quit [Changing host]
achin has joined #ipfs
pfrazee has joined #ipfs
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Tv` has joined #ipfs
pfrazee has quit [Read error: Connection reset by peer]
pfrazee has joined #ipfs
anewuser has joined #ipfs
<lgierth>
Kubuxu: sigh, the github api doesn't return all repos on /orgs/ipfs/repos -- e.g. go-blocks is missing
Johnny_ has quit []
<lgierth>
Kubuxu: ah, there's pagination, with per_page maximum 100
<lgierth>
non-discoverable pagination is annoying
ulrichard has joined #ipfs
m0ns00n has quit [Quit: quit]
erde74 has joined #ipfs
ulrichard has quit [Quit: Ex-Chat]
s_kunk has quit [Ping timeout: 265 seconds]
palkeo has quit [Ping timeout: 264 seconds]
palkeo has joined #ipfs
<lgierth>
Kubuxu: i got 113 repos for the ipfs org
ygrek has joined #ipfs
<Kubuxu>
lgierth: let me give you my script
<Kubuxu>
shet
<Kubuxu>
I am stupid
<Kubuxu>
I didn't sync it
<lgierth>
Kubuxu: i just did pagination in bash lol
<lgierth>
expr and recursion for the win
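For reference, a small sketch of that bash pagination against the GitHub API; per_page tops out at 100, so keep fetching pages until one comes back empty (assumes curl and jq are available):

    page=1
    while :; do
      names=$(curl -s "https://api.github.com/orgs/ipfs/repos?per_page=100&page=$page" | jq -r '.[].full_name')
      [ -z "$names" ] && break
      printf '%s\n' "$names"
      page=$(expr "$page" + 1)    # expr, as mentioned; page=$((page + 1)) works too
    done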
<Kubuxu>
but it was using Google's github api client for go, cloning, and doing mr register reponame
<lgierth>
but anyhow, i'm convinced a curated mrconfig of only the repos we need is better
<lgierth>
e.g. all the go- repos in ipfs/ipld/libp2p/multiformats
<Kubuxu>
yeah, it was only for bootstrap
<lgierth>
plus a few that are in individual's accounts
<Kubuxu>
we can have mrconfig in the root of go/src/
<Kubuxu>
as it can address repos deeper in the tree
<Kubuxu>
but we need a way to do the updates right
<lgierth>
let me get another coffee
<lgierth>
gx updates?
<Kubuxu>
as they have to cascade
<Kubuxu>
yup
<lgierth>
yeah we'll still have to do them inside-out
<Kubuxu>
it could be quite simple
jedahan has joined #ipfs
<lgierth>
but it's still a lot nicer if we don't have to touch each affected repo individually
ilmu has joined #ipfs
<Kubuxu>
I have a rough idea for such a script.
<lgierth>
mmh there's probably a way to do the bubbling-up automatically
<lgierth>
do one update, then see which repos were affected, do the same for those
<Kubuxu>
or wait
<lgierth>
bonus points: have automatic commit messages all the way through which capture the changes in the dependencies
<lgierth>
instead of just "lol update go-libp2p" :)
cemerick has joined #ipfs
<Kubuxu>
create fifo, put updated hash into it, then: while read hash from fifo: for repo in repos: if repo contains dep_from_hash(hash): gx update repo; gx version patch; gx publish; add just created hash to fifo; fi; done; done
anewuser has quit [Read error: Connection reset by peer]
<lgierth>
if we're smart about reading from the fifo, we can properly batch updates
<lgierth>
e.g. if we update go-log, and then need to update multiple dependencies of go-ipfs because of that
<lgierth>
or we can just gx update in every repo, and let gx figure whether it should actually do anything
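A rough bash rendering of the fifo-style cascade sketched above; the repo list, the dependency check, and the way the freshly published hash is extracted are assumptions, and the gx update invocation is shown schematically (its exact arguments are discussed a bit further down):

    #!/usr/bin/env bash
    set -euo pipefail
    repos=(go-multiaddr go-libp2p go-ipfs)     # illustrative
    queue=("$1:$2")                            # seed with "<pkgname>:<newhash>"

    while ((${#queue[@]})); do
      entry=${queue[0]}; queue=("${queue[@]:1}")
      name=${entry%%:*}; hash=${entry#*:}
      for repo in "${repos[@]}"; do
        # crude "does this repo depend on it?" check against package.json
        grep -q "\"name\": \"$name\"" "$repo/package.json" || continue
        pushd "$repo" >/dev/null
        gx update "$hash"                      # schematic; exact gx update arguments may differ
        gx version patch
        newhash=$(gx publish | grep -o 'Qm[1-9A-HJ-NP-Za-km-z]*' | tail -n1)   # assumes publish prints the new hash
        popd >/dev/null
        queue+=("$(jq -r .name "$repo/package.json"):$newhash")   # cascade: downstream repos need this next
      done
    done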
gmcquillan__ has joined #ipfs
gmcquillan__ is now known as gmcquillan___
G-Ray has quit [Quit: Konversation terminated!]
<Kubuxu>
we can also just create a graph and check how dependencies cascade using it
<Kubuxu>
but it is a more advanced solution that one would have to work on
<lgierth>
yeah...
<lgierth>
let
<lgierth>
let's try out plain mr helpers for a bit?
<Kubuxu>
ok
<lgierth>
and i was thinking that `gx update <pkgname> <newhash>` might be a better interface, it can help us not forget to update some package just because it's behind
<lgierth>
we can grab the current hash from package.json
<Kubuxu>
gx update <newhash> - checks the name of package <newhash> and updates that name in the package.json
<lgierth>
hah yeah that works too ;)
<lgierth>
jq .name
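Put together, the proposed flow would look roughly like this; go-log and the Qm hash are placeholders, the gx update form is the one proposed above (not necessarily gx's current interface), and gxDependencies is the field gx's package.json uses to record deps:

    jq -r .name package.json                                                   # this package's own name
    jq -r '.gxDependencies[] | select(.name=="go-log") | .hash' package.json   # current hash of the dep
    gx update go-log QmNewLogHash                                              # proposed <pkgname> <newhash> form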
gamemanj has joined #ipfs
espadrine has quit [Ping timeout: 265 seconds]
ygrek has quit [Ping timeout: 244 seconds]
ilmu has quit [Ping timeout: 265 seconds]
Aranjedeath has joined #ipfs
<Aranjedeath>
lmao new rule: can't run ipfs AND bittorrent tracker simultaneously
<gamemanj>
the big slope could just be user disconnect-reconnect because of the daily cycle,
<Aranjedeath>
(none of the rest of those graphs are mine, just the bittorrent related ones)
<Aranjedeath>
(in case anyone goes looking)
<gamemanj>
but the really fast oscillation makes no sense unless they're all in sync
<Aranjedeath>
well, I get scrapes from a few boxes on a cronjob so
<Aranjedeath>
there's one box that fullscrapes on an interval
<Aranjedeath>
I... think it's either a torrent search engine or a media company
<gamemanj>
It's about 100 requests or so difference in some places, though.
<Aranjedeath>
anyway, everything in torrents is like that. my graphs have been that way since I've been running it
<Kubuxu>
gamemanj: probably some client requesting updates every hour with some +-15 mins or something
<Aranjedeath>
rolling stampedes
<Aranjedeath>
yeah the target is announces every 35 minutes I think
<Aranjedeath>
it picks an announce interval for them the first time to bucket them in a way that smooths the load out
<Aranjedeath>
this thing doesn't graph the announce buckets, but the stats for that are publicly accessible
<lgierth>
whyrusleeping kubuxu: pushing v0.4.3 to the release branch
<Aranjedeath>
eee
<lgierth>
so that docker hub picks it up for the release channel
<Aranjedeath>
props lgierth <3
WhiteWhale has joined #ipfs
* Aranjedeath
is that dude who goes into unrelated channels and talks about bittorrent trackers lol
* Aranjedeath
woops
<lgierth>
i think i've even witnessed you walking into random rooms
<lgierth>
:)
ilmu has joined #ipfs
<lgierth>
whyrusleeping: sorry for the merge commit :)
<Aranjedeath>
yes
<Aranjedeath>
yes I have
<Aranjedeath>
both irc "rooms" and real ones
Encrypt has quit [Quit: Dinner time!]
<Aranjedeath>
suddenly I appear "have you heard about our lord and savior, bittorrent?"
<lgierth>
:)
<lgierth>
ok lars out
<gamemanj>
but you have to find the torrent
<gamemanj>
and that's work
<kulelu88>
If I start loading up a majority of the peers for a certain IPFS filesystem, can I conduct a 51% attack?
wuch has joined #ipfs
<Aranjedeath>
gamemanj: it's true! and to shirk all legal liability, I don't keep those!
<Aranjedeath>
I'll let the search engines tango with rightsholders. #notMyProblem
<gamemanj>
kulelu88: It's not like you can magically make a peer believe that a bad signature matches the key.
ilmu has quit [Ping timeout: 260 seconds]
<gamemanj>
kulelu88: And everything else is hashes.
jokoon has quit [Quit: Leaving]
<kulelu88>
gamemanj: so the files will be signed somehow to indicate any changes to them? (like a git commit?)
<gamemanj>
kulelu88: Things are referred to by hash.
<gamemanj>
kulelu88: And IPNS records are signed records that refer to a hash.
<kulelu88>
interesting. but if I control 51+% of the peers can't I conduct an MiTM?
<gamemanj>
And do what?
<kulelu88>
inject malicious content?
ilmu has joined #ipfs
<gamemanj>
How? If everything is referred to by hash, then your malicious content's hash won't match what's being requested, so it'll be treated as different content.
<gamemanj>
If I had to suggest an attack method, it would be a hash collision attack.
<gamemanj>
Since, uh, everything's based on hashes.
<gamemanj>
The IPNS records also have signatures, but they are indexed by the hash of the public key.
<gamemanj>
Owning 51% of the nodes won't make hash("abc") == hash("123").
<kulelu88>
interesting. I'll have to read more about this
<gamemanj>
There's probably some attack you could do that disables the network, though, like sending traffic to every node on the planet, if you have enough resources to perform a 51% attack.
<gamemanj>
IIRC, 51% attacks are only really meaningful in the context of blockchains, where everybody is continuously performing computations and the system can be taken over if you have a longer blockchain versus other people.
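A quick local illustration of why holding lots of peers doesn't help: the identifier is derived from the content, so tampered bytes simply get a different hash (assumes a local ipfs node; the file names are placeholders):

    printf 'original' > a.txt && ipfs add -q a.txt    # -> some hash A
    printf 'tampered' > b.txt && ipfs add -q b.txt    # -> a different hash B
    # a request for A is only satisfied by bytes that hash back to A,
    # no matter how many peers are serving B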
WhiteWhale has quit [Ping timeout: 244 seconds]
PseudoNoob has quit [Remote host closed the connection]
<Aranjedeath>
yeah I could see someone spamming bad hashes and knocking some of the shittier nodes off (DoS from excess cpu usage)
<Aranjedeath>
or just boring old ddos
<gamemanj>
step 1, find the most expensive operation in the system
<gamemanj>
step 2, spam that operation
<Aranjedeath>
yeah, basically
<Aranjedeath>
that's more likely to result in badness if nothing else
<Aranjedeath>
extreme, system slowing, annoyance
<gamemanj>
I'm already seeing some TODOs of death
Encrypt has joined #ipfs
<Aranjedeath>
haha
<Aranjedeath>
yeah that's basically it
<Aranjedeath>
TODO: make this faster
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
jedahan has joined #ipfs
asdfsdfsf has joined #ipfs
ilmu has quit [Ping timeout: 248 seconds]
jedahan has quit [Ping timeout: 265 seconds]
jedahan has joined #ipfs
jedahan has quit [Remote host closed the connection]
jedahan has joined #ipfs
nonaTure has joined #ipfs
asdfsdfsf has quit [Quit: Page closed]
wallacoloo has joined #ipfs
ilmu has joined #ipfs
matoro has joined #ipfs
jedahan has quit [Client Quit]
jedahan has joined #ipfs
captain_morgan has joined #ipfs
ilmu has quit [Ping timeout: 272 seconds]
LegalResale has quit [Quit: Leaving]
LegalResale has joined #ipfs
ppham has quit [Remote host closed the connection]
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
erde74 has quit [Quit: Verlassend]
gamemanj has quit [Quit: If there are whole sections of human minds dedicated to finding order in the oddest of places, and yet no order is true, what does that say about the human mind?]
spilotro has quit [Ping timeout: 260 seconds]
jedahan has joined #ipfs
spilotro has joined #ipfs
<victorbjelkholm>
chriscool1, saw your work on the GitLab pipeline stuff, looks good so far. How is the integration between GitHub and GitLab working so far? Is it possible to build PRs from GitHub on GitLab?
<victorbjelkholm>
Using GitLab for some personal projects and love the whole repo/issues/milestones/pipelines/registry solution, but worried about how the pipelines/builds link together with GitHub
<kulelu88>
chriscool1 works for gitlab?
<chriscool1>
victorbjelkholm: thanks, the integration between GitHub and GitLab is really smooth I think
<chriscool1>
kulelu88: yeah I am working for GitLab, Booking.com and Protocol Labs
<Aranjedeath>
hahawoot
<victorbjelkholm>
chriscool1, yeah, seems the syncing between github -> gitlab works fine, just wondering about getting the builds for PRs and all that set up without dealing too much with custom webhooks and such. What do you think?
anonymuse has quit [Remote host closed the connection]
<victorbjelkholm>
I'm so sad that BuildKite doesn't have public builds...
<kulelu88>
how is that possible? chriscool1
<kulelu88>
oh, free work you mean
<chriscool1>
victorbjelkholm: for now I haven't tried to have the badges with the test results shown on GitHub
<kulelu88>
So I have an outrageous idea: a decentralized bigchain database, backed up on IPFS. Anybody with some positive words of encouragement? :D
<chriscool1>
that could need a bit of tweaking but it shouldn't be very difficult
<chriscool1>
kulelu88: you mean how is it possible for me to work for those 3 companies?
<kulelu88>
chriscool1: as a contractor, I can understand. However, full-time wouldn't be possible
<chriscool1>
yeah I am a contractor
<chriscool1>
kulelu88: a decentralized bigchain database, backed up on IPFS: yeah go for it!!!
<kulelu88>
chriscool1: I figured that the best way to store a decentralized DB is on a decentralized file-system? no? :D
<kulelu88>
possibly outrageous idea chriscool1 : run 3 different self-hosted CI tools and let them all build and sign their builds. If signatures match ...
<chriscool1>
I think we want to get rid of many CI tools as it makes things complex to manage: many config files, many logins, ...
<kulelu88>
chriscool1: no no, don't run them on the free infrastructure. Self-host them.
clownpriest has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
WhiteWhale has joined #ipfs
clownpriest has joined #ipfs
<chriscool1>
kulelu88: but how will self-hosting reduce complexity? On the contrary, I would think it would make things harder to maintain.
<kulelu88>
chriscool1: well it is a case of paranoia for making sure the builds are valid across options
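A toy version of that cross-CI sanity check, under the big assumption that the builds are reproducible at all; build1/ build2/ build3/ are placeholder output directories from the three CIs:

    if [ "$(sha256sum build1/ipfs build2/ipfs build3/ipfs | awk '{print $1}' | sort -u | wc -l)" -eq 1 ]; then
      echo "all three builds produced identical binaries"
    else
      echo "mismatch - do not trust the artifact yet"
    fi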
<Aranjedeath>
this is why I build everything from scratch on freebsd
<Aranjedeath>
easy to find problems with assumptions :D
<Aranjedeath>
next level: openbsd
<Kubuxu>
we are using CIs only for tests
<Kubuxu>
we are not doing continuous deployment as of right now
<Kubuxu>
the problem with not self-hosting build servers is build queues in most cases
<Kubuxu>
see 3h+ travis queues on ipfs org
Aeon has quit [Remote host closed the connection]
<Aranjedeath>
if (queue_depth > queue_average_cadence) { self-host(); }
<Kubuxu>
something like that could work but it would mean that we would have to have self-hosting as an option since the beginning
<Aranjedeath>
ya
<Kubuxu>
at which point we can self host either way...
<Kubuxu>
and just have elastic AWS instances or something like that
<Aranjedeath>
template the buildbot and launch a spot instance or something
<Kubuxu>
yeah
cemerick has quit [Ping timeout: 240 seconds]
<Kubuxu>
to reduce cost you can have X runners on some dedi and then start AWS if that is not enough
G-Ray has joined #ipfs
Encrypt has quit [Quit: Sleeping time!]
clownpriest has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<chriscool1>
Kubuxu: on https://gitlab.com/, you can configure as many self hosted Runners as you want on different OS/platforms
<chriscool1>
so you can avoid big queues and you only have the self hosted Runners and the machines on which they are running to configure and manage
<kulelu88>
does anybody else use bigchainDB here?
<chriscool1>
and the machines hosting the runners could also be AWS machines
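For context, hooking up such a self-hosted Runner is roughly a one-liner per machine; the token, image, and description below are placeholders, and the flags follow gitlab-runner's register command:

    gitlab-runner register \
      --non-interactive \
      --url https://gitlab.com/ \
      --registration-token <TOKEN> \
      --executor docker \
      --docker-image golang:1.7 \
      --description "ipfs self-hosted runner"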
ebel has quit [Ping timeout: 248 seconds]
ebel has joined #ipfs
skoocda has joined #ipfs
<victorbjelkholm>
my experience tells me that self-hosting the build servers becomes very costly when they fail, and they will fail at one point AND another
skoocda has quit [Remote host closed the connection]
skoocda has joined #ipfs
keks has quit [Ping timeout: 264 seconds]
clownpriest has joined #ipfs
Aeon has joined #ipfs
Aeon is now known as Aeonwaves
<lgierth>
richardlitt: what what what i'm leading the all hands call? i guess on the upside that means i don't have to take notes :P
<lgierth>
(all good!)
rgrinberg has joined #ipfs
<richardlitt>
You're the moderator buddy! :D
<lgierth>
just forgot i had volunteered
Mitar has quit [Ping timeout: 265 seconds]
Mitar has joined #ipfs
ashark has quit [Ping timeout: 244 seconds]
chriscool1 has quit [Ping timeout: 264 seconds]
computerfreak has joined #ipfs
ppham has joined #ipfs
dmr has joined #ipfs
G-Ray has quit [Quit: Konversation terminated!]
<victorbjelkholm>
anyone else remembers that lgierth volunteered to be the moderator for the next ten calls? That's what I remember, hmm...
taaem has quit [Ping timeout: 240 seconds]
pfrazee has quit [Remote host closed the connection]
anewuser has joined #ipfs
HastaJun_ has quit [Ping timeout: 240 seconds]
ppham has quit [Remote host closed the connection]
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
TheNain38 has joined #ipfs
rendar has quit [Quit: Leaving]
HastaJun_ has joined #ipfs
ulrichard has joined #ipfs
ckwaldon has quit [Remote host closed the connection]
ulrichard has quit [Client Quit]
gmcquillan___ has quit [Quit: gmcquillan___]
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
shizy has quit [Ping timeout: 250 seconds]
skoocda has quit [Quit: Leaving]
pfrazee has joined #ipfs
pfrazee has quit [Ping timeout: 255 seconds]
ppham has joined #ipfs
anewuser has quit [Ping timeout: 240 seconds]
TheNain38 has quit [Quit: I'm going away]
sametsisartenep has quit [Quit: leaving]
matoro has quit [Ping timeout: 272 seconds]
lupi_ has joined #ipfs
lupi_ has quit [Max SendQ exceeded]
lupi_ has joined #ipfs
ppham has quit [Remote host closed the connection]
lupi_ has quit [Max SendQ exceeded]
lupi_ has joined #ipfs
lupi_ has quit [Max SendQ exceeded]
lupi_ has joined #ipfs
ppham has joined #ipfs
lupi_ has quit [Max SendQ exceeded]
lupi_ has joined #ipfs
ppham has quit [Remote host closed the connection]