avsm changed the topic of #mirage to: Good news everyone! Mirage 3.0 released!
<dmj`>
hannes: do you know if mirage gets linked w/ musl
<dmj`>
well, unikernels created with mirage
<dmj`>
ah, it probably just uses glibc if it uses solo5 right
<hannes>
dmj`: there's no libc involved at all
<dmj`>
hannes: I assume it statically links a C library at some point
<hannes>
no.
<hannes>
there are memcpy, memmove, memcmp, and an alloc implementation, but not a C library.
<hannes>
back in mirage1 days, there used to be a dietlibc in the build process, but it is no longer...
<hannes>
lots of symbols (like getenv) which are used by the OCaml runtime are implemented as stubs: they return NULL or -1
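A minimal sketch of what such stubs could look like (illustrative C, not the actual Mirage/Solo5 source):

    #include <stddef.h>

    /* Stubs satisfy linker references from the OCaml runtime
     * without providing any real libc behaviour. */
    char *getenv(const char *name)
    {
        (void)name;
        return NULL;   /* no environment in a unikernel */
    }

    int system(const char *command)
    {
        (void)command;
        return -1;     /* no shell to run anything */
    }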
<dmj`>
hannes: hmm, that's cool. So what about OCaml code that FFIs into C? Does solo5 provide a POSIX compatibility layer?
<hannes>
there is no POSIX compatibility layer.
<dmj`>
Ah, I see.
<hannes>
some bits and pieces are in C, such as AESNI (and the AES implementation), but these are only using the above-mentioned functions...
<hannes>
I believe libgmp (via zarith) is the biggest C library - but this also works without POSIX
<dmj`>
I see. Sure, so an OCaml application is statically linked with gmp.a and solo5, then runs on Xen (or any other hypervisor)
<hannes>
it is a custom build of gmp.a. and solo5 provides a kvm/virtio backend, while xen is a separate one (thus you won't have solo5 and xen both in the same vm)
<dmj`>
so solo5 lets you target different backends
<hannes>
dmj`: yes, atm kvm and virtio. and there's a prototype port to hypervisor.framework (MacOSX) in a branch
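For reference, the backend is chosen when configuring the unikernel; with Mirage 3 the invocations look roughly like this:

    mirage configure -t ukvm     # Solo5 on Linux/KVM via the ukvm monitor
    mirage configure -t virtio   # Solo5 virtio image for other hypervisors
    mirage configure -t xen      # the separate Xen backend
    make depend && make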
<kensan>
dmj`: There is also a Muen SK port but I am waiting for the Solo5 restructuring work to land.
<hannes>
kensan: how is muen compared to l4?
<kensan>
hannes: A big difference is that Muen is not trying to be a general-purpose microkernel but instead it is a separation kernel.
<kensan>
hannes: There is no dynamic resource management at runtime.
<kensan>
hannes: Basically the sole purpose is to isolate subjects from each other.
<kensan>
hannes: Resource allocation is done at integration time. This includes memory, devices, CPU execution time etc.
<kensan>
hannes: On Muen we use Intel VT-x/VT-d for subject separation. So virtualization is basically our means to isolating subjects.
<kensan>
hannes: It allows us quite fine-grained control over what is allowed/exposed to subjects and what is forbidden.
<kensan>
hannes: e.g. access to the timestamp counter (rdtsc) is trapped since we do not want to give subjects access to such a high resolution time source by default.
<kensan>
hannes: Another difference to probably most other kernels is that it is *not* written in C ;)
<hannes>
kensan: where are the PCI network devices handled, and how is a virtual NIC provided?
<hannes>
(and thanks for your answers, I was showering)
<kensan>
hannes: Ah I forgot: Muen does not provide a complete IPC interface/API.
<kensan>
hannes: There are what we call events which basically allow a subject to send an interrupt to another subject.
<kensan>
hannes: and then there is shared memory. These basic building blocks enable subjects to implement communication protocols.
<kensan>
hannes: One that we use is "shared memory streams" or channels: unidirectional memory with an optional event for signalling.
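A rough sketch of such a unidirectional channel (illustrative C; the layout and trigger_event are invented here, not Muen's actual interface):

    #include <stdint.h>
    #include <string.h>

    #define SLOTS     4096
    #define SLOT_SIZE 64

    /* Writer-owned counter plus fixed-size slots in shared memory. */
    struct channel {
        volatile uint64_t wc;             /* write counter */
        uint8_t data[SLOTS][SLOT_SIZE];
    };

    void trigger_event(unsigned event_nr);  /* hypothetical event hook */

    void channel_write(struct channel *c, const void *msg, unsigned event_nr)
    {
        memcpy((void *)c->data[c->wc % SLOTS], msg, SLOT_SIZE);
        __atomic_store_n(&c->wc, c->wc + 1, __ATOMIC_RELEASE);
        trigger_event(event_nr);          /* optional signalling */
    }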
<hannes>
kensan: is the dom0 involved in setting up shared memory between guests (I guess it has to)? and so I assume there's a driver guest which then passes all the received packets to all interested other guests?
<kensan>
hannes: regarding PCI nic: in the Mirage/Solo5 demo system there is a Linux subject that has the NIC assigned.
<kensan>
hannes: We have a Linux kernel module which implements a virtual NIC, that uses shared memory to push packets across to other subjects.
<hannes>
kensan: thx.. really need to play around with muen at some point...
<kensan>
hannes: No, there is no Dom0 or privileged subject. On Muen the communication channels and events are also static and defined at integration time in the system policy.
<kensan>
hannes: So the Solo5 unikernel that receives network packets from Linux does not have the Linux itself in its TCB.
<kensan>
hannes: e.g. if you terminate an encrypted connection inside the unikernel.
<kensan>
hannes: obviously the Linux can do DoS by simply not pushing the network packets across etc.
<kensan>
hannes: But there is no privileged Dom0 that could alter the memory layout of the system etc.
<hannes>
ic
<kensan>
hannes: and since for Solo5/Muen all devices are implemented without the need for a Monitor, there is also no VMM in your TCB.
<kensan>
hannes: It enables the construction of really lean/stripped-down systems.
<kensan>
hannes: Compared to Xen or other general-purpose hypervisors the price is of course runtime dynamism since on Muen *all* resource allocation is static.
<kensan>
hannes: that is why it is more suited for building special-purpose systems with an embedded style of development where you know your hardware well etc.
<kensan>
hannes: imho it is taking the special-purpose idea that Mirage applies to "VMs" to encompass the entire system.
<kensan>
hannes: Having said that, it does not mean that the range of systems one can implement is restricted to very boring scenarios ;)
<gjaldon__>
does using a unikernel running on kvm mean that I need to have a Linux OS installed to run the unikernel?
<dstolfa>
gjaldon__: Mirage runs on a wide variety of things. You don't have to run ukvm, you can run virtio, which should run on any hypervisor supporting virtio
<dstolfa>
I'm freely running Mirage on FreeBSD (though it doesn't have native bhyve support yet, but I'm told it's in the works)
<dstolfa>
KVM-wise, I'd imagine it would run on illumos too
<gjaldon__>
dstolfa: I think I read a press release where they mentioned bhyve support in Mirage 3.0. I may be wrong though
<dstolfa>
gjaldon__: Well, bhyve already is supported, it just goes through VirtIO
<hannes>
gjaldon__: you'll need some sort of Unix atm to run MirageOS, yes... for the drivers (and/or see kensan's description above of how to use the Muen SK, with network device drivers provided by a Linux running as a guest)
<hannes>
(kensan: sorry, I cannot adapt to your language of subjects yet ;)
<gjaldon__>
The 'virtualization ecosystem' is a new world to me and still figuring things out
<abeaumont>
hannes: could you tell me the name of the author of the forced_lto branch? failed to find it on a quick search
<abeaumont>
hannes: happy bday, btw :)
<dstolfa>
hannes: Oh, is it your birthday? Happy birthday!
<gjaldon__>
happy bday, hannes! :D
<hannes>
abeaumont: https://github.com/ocaml/ocaml/pull/608 is the PR (where a report "tried with the following packages/command line invocation: didn't work" would be appreciated)
<hannes>
abeaumont: it seems you need to pass -lto somewhere...
<hannes>
thx people
<hannes>
and now off to my office
<abeaumont>
hannes: oh... that may be the problem then, i'll check it, thx!
<gjaldon__>
dstolfa: been reading a bit on virtio and was wondering if I understood correctly. virtio provides virtualization for network and device drivers and KVM is the hypervisor? virtio also provides a standard api and many hypervisors work with virtio? what is ukvm?
<dstolfa>
gjaldon__: VirtIO is just an API, any hypervisor can provide it, it works by having these queues called "virtqueues", through which you can send serialized control messages as defined per-driver
<dstolfa>
gjaldon__: The communication is bi-directional and generally works by having a device driver on the guest side, and a userspace device providing information to it on the host side
<dstolfa>
The nice thing about it is that it provides a way to bulk transfer things with 1 context switch
<dstolfa>
So I can push things into the virtqueue and then notify all at once
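Schematically, that batching pattern looks like this (illustrative C; vq_enqueue/vq_notify are invented names, not a real virtio API):

    #include <stddef.h>

    struct vq;  /* opaque virtqueue handle */
    int  vq_enqueue(struct vq *q, void *buf, size_t len);  /* hypothetical */
    void vq_notify(struct vq *q);                          /* hypothetical */

    /* Queue several buffers, then kick the host once: a single
     * guest exit amortised over the whole batch. */
    void send_batch(struct vq *q, void *bufs[], size_t lens[], int n)
    {
        for (int i = 0; i < n; i++)
            vq_enqueue(q, bufs[i], lens[i]);
        vq_notify(q);
    }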
<kensan>
hannes: No worries. Sorry, it's all a little confusing with such terms as subjects, components, partitions, guests, VMs... ;)
<dstolfa>
kensan: I don't have too much time now, but I'd be interested to hear about the performance implications of what you've done :)
<kensan>
dstolfa: That is a tough subject since there is no general statement I could give...
<dstolfa>
kensan: Well, I don't think anyone can give a general statement about the performance of their system
<dstolfa>
I was hoping to discuss some of the specifics, but alas I don't have time right now
<kensan>
dstolfa: Looking at it, one would think our approach is a no-go performance-wise. Turns out it can perform quite well if it fits your use case...
<kensan>
dstolfa: You can always reach me via email.
<dstolfa>
kensan: Sure thing, but if you're going to be around here later on today, I can message you here :)
<gjaldon__>
dstolfa: ok so VirtIO is an api and it uses "virtqueues" for communication between guest and host? how does VirtIO compare to UKVM?
<kensan>
dstolfa: A lot of people focus on the guest state switching, which is certainly more costly on Muen since "everything" is inside a VM vs. microkernels where it's just ring3-ring0 transitions etc.
<kensan>
dstolfa: For more detailed discussions I actually prefer mail but I should be lurking.
<kensan>
;)
<dstolfa>
gjaldon__: I don't know how it specifically compares with respect to Mirage, you'd have to ask someone who's worked on it. If I've gathered it correctly, UKVM essentially implements a unikernel monitor which talks to KVM itself. In FreeBSD, this is something like the vmmapi in bhyve. With VirtIO, they'd need to implement the drivers in the unikernel
<dstolfa>
kensan: sounds good :)
<dstolfa>
With KVM itself, they'd probably need to use some kind of paravirt interface (again, don't know for sure), such as hypercalls to communicate from the guest kernel
<gjaldon__>
dstolfa: thank you for the explanations. I appreciate it.
<hannes>
abeaumont: well, the ocaml+lto compiler should actually do this by default (but if I recall you & Chris in Marrakesh correctly, it didn't do anything)
<kensan>
dstolfa: Just out of curiosity, what kind of projects are you working on/involved with?
<dstolfa>
kensan: I've been working on a tracing system that allows you to have global state across virtual machines
<dstolfa>
basically, extending DTrace to allow you to trace virtual machines as you do locally
<dstolfa>
But then have global state across them
<kensan>
dstolfa: Ah, sounds interesting.
* kensan
is off for a bit, coffee is brewing.
<hannes>
abeaumont: according to how I understand it, the 4.04.0+forced_lto should enable lto by default... but that seems to be broken... will try to find time to investigate
<abeaumont>
hannes: thanks, will try to take a deeper look at it too
<argent_smith>
hannes: is there a way to use ppxes inside of unikernel code? e.g. lwt.ppx, etc.? including them in the packages list in config.ml breaks make depend.
<hannes>
argent_smith: you can specify package(lwt.ppx) inside a _tags file; nothing goes in config.ml.
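i.e. a _tags line along these lines (assuming the whole tree should see the ppx):

    true: package(lwt.ppx)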
<argent_smith>
thanks a ton!
<gjaldon__>
I have my blog as a unikernel (mostly ripped code from the static-website unikernel in mirage-skeleton). I want to deploy it to Google Compute Engine. I already have the compiled output using `ukvm` as the target. how do I go about adding the image to GCE? which of the compiled outputs is the image? I tried googling but couldn't find guides on how to run a Mirage unikernel on GCE
<kensan>
gjaldon__: My guess is you need to use the virtio target.
<kensan>
gjaldon__: The solo5 readme mentions Google Compute Engine in the "Running virtio unikernels with other hypervisors" section.
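Concretely, that likely means rebuilding with the virtio target and then following the README's image instructions (a sketch; the exact steps are in the Solo5 README):

    mirage configure -t virtio
    make depend && make
    # then package/upload the resulting virtio image for GCE as described
    # in the "Running virtio unikernels with other hypervisors" section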
<gjaldon__>
kensan: thanks! looks like the solo5 readme has what I need :D
<kensan>
gjaldon__: No problem.
<kensan>
Ironed out some Solo5/Muen issues and now got it to serve the Muen project website :)