Magnus Skjegstad

Just-in-Time Summoning of Unikernels (v0.2)

Jitsu - or Just-in-Time Summoning of Unikernels - is a prototype DNS server that can boot virtual machines on demand. When Jitsu receives a DNS query, a virtual machine is booted automatically before the query response is sent back to the client. If the virtual machine is a unikernel, it can boot in milliseconds and be available as soon as the client receives the response. To the client it looks as if the virtual machine was running the whole time.

Jitsu can be used to run microservices that only exist after they have been resolved in DNS - and may in the future facilitate demand-driven clouds or extreme scaling with a unikernel per URL. Jitsu has also been used to boot unikernels in milliseconds on ARM devices.

A new version of Jitsu was just released, and I'll summarize some of the old and new features here. This is the first version that supports both MirageOS and Rumprun unikernels, and it now uses the distributed Irmin database to store state. A full list of changes is available here.

A unikernel experiment: A VM for every URL

I recently wrote Jitsu, a DNS server that can boot unikernels on demand. The following diagram shows a simplified version of how Jitsu works. The client sends a DNS query to the DNS server (Jitsu). The DNS server starts a unikernel and sends a DNS response back to the client while the unikernel is booting. When the client receives the DNS response, it opens a TCP connection to the unikernel, which by then has completed booting and is ready to accept the connection.
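To make the flow concrete, below is a minimal sketch in OCaml (the language Jitsu and MirageOS are written in) of the boot-before-answer idea, using Lwt for concurrency. The functions boot_unikernel and lookup_ip are hypothetical stand-ins for illustration; this is not Jitsu's actual API.

    open Lwt.Infix

    (* Hypothetical stand-in: start the unikernel that serves [name],
       unless it is already running. *)
    let boot_unikernel name =
      Printf.printf "booting unikernel for %s\n%!" name;
      Lwt.return_unit

    (* Hypothetical stand-in: a static table mapping DNS names to the
       addresses their unikernels will listen on. *)
    let lookup_ip = function
      | "www.example.org" -> Some "10.0.0.2"
      | _ -> None

    (* Boot first, answer second: by the time the client has received
       the response and connects, the unikernel has finished booting. *)
    let handle_query name =
      match lookup_ip name with
      | None -> Lwt.return_none
      | Some ip -> boot_unikernel name >|= fun () -> Some ip

    let () =
      Lwt_main.run
        (handle_query "www.example.org" >|= function
         | Some ip -> Printf.printf "answer: %s\n" ip
         | None -> print_endline "NXDOMAIN")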

The unikernels are built with MirageOS, a library operating system that allows applications to be compiled directly to small Xen VMs. These unikernels include only the operating system components the application needs - nothing else is added. The result is very small VMs with low resource requirements that boot quickly.
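For a taste of what a MirageOS application looks like, here is (roughly) the classic hello-world unikernel from the mirage-skeleton examples of the MirageOS 2.x era this post describes; later MirageOS releases renamed these module signatures.

    (* unikernel.ml: logs two lines to the console, one second apart.
       V1_LWT.CONSOLE is the MirageOS 2.x console signature. *)
    open Lwt

    module Main (C : V1_LWT.CONSOLE) = struct
      let start c =
        C.log_s c "hello" >>= fun () ->
        OS.Time.sleep 1.0 >>= fun () ->
        C.log_s c "world"
    end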

Now, what if I wanted to use Jitsu to boot my unikernel website when someone accesses it? My website is fairly low traffic, so this could potentially save me some resource use and hosting costs. Unfortunately, there are always a few requests per hour to some of the more popular sections, which would likely keep my unikernel running most of the time. But what if I could split my unikernel into even smaller unikernels? What if I went to an extreme and had one unikernel per URL? Then I would only boot unikernels for the URLs that are actually being used, and each would only need to know how to serve a single page.

This could also have a number of other benefits, such as the ability to spin up multiple unikernels for an extremely popular web page and use DNS to direct clients to the unikernel closest to them - while keeping the rest of the site inactive (let's ignore web crawlers for now). If my website had dynamic sections there could also be security benefits: every dynamic page would run as a separate VM, so an attack on a single page would neither bring down the rest of the site nor reveal data stored in other unikernels.

Local MirageOS development with Xen and VirtualBox

MirageOS is a library operating system. An application written for MirageOS is compiled to an operating system kernel that contains only the specific functionality required by the application - a unikernel. MirageOS unikernels can be compiled for several targets, including standalone VMs that run under Xen. The Xen unikernels can be deployed directly to common cloud services such as Amazon EC2 and Linode.
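The target is selected at configure time, so the application code stays target-independent. As a sketch, a MirageOS 2.x-style config.ml for the hello-world unikernel above would look roughly like this, with "mirage configure --xen" (or "--unix") followed by make producing the actual kernel:

    (* config.ml: declares the unikernel's entry point and the devices
       it depends on (here, just a console). *)
    open Mirage

    let main = foreign "Unikernel.Main" (console @-> job)

    let () = register "hello" [ main $ default_console ]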

I have done a lot of MirageOS development for Xen lately, and it can be inconvenient to rely on an external server or service to run and debug a unikernel. As an alternative, I have set up a VirtualBox VM that runs a Xen server. The MirageOS unikernels then run as VMs under Xen, which itself runs as a VM in VirtualBox. With the "Host-only networking" feature in VirtualBox, the unikernels are accessible from the host operating system, which can be very useful for testing client/server applications: a unikernel that hosts a web page can, for example, be tested in a web browser on the host OS. I hope this setup will be useful to others, so I am documenting it in this blog post.
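For reference, booting a unikernel on the Xen server then amounts to writing a small xl domain configuration and passing it to xl create. The sketch below uses made-up paths and a made-up bridge name; adjust them to match your own setup.

    # www.cfg: example xl configuration for a MirageOS unikernel.
    name   = "mirage-www"
    kernel = "/root/mirage-www/mir-www.xen"
    memory = 32
    vif    = [ 'bridge=xenbr0' ]

With this file in place, running "xl create www.cfg" boots the unikernel, and it should then be reachable from the host over the host-only network.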