Unikernels with OPS run faster and are more secure than Docker containers


Ian Eyberg · Mar 1, 2019 00:00 · 1067 words · 6 minute read

Unikernels are an emerging deployment pattern that many companies are starting to dive into headfirst, and they are seeing some outstanding results. Ulrich Drepper, the main force behind glibc, is also behind a project at Red Hat called UKL, while IBM has been busily filing patents. Researchers over at NEC are reporting boot times of 5ms! For comparison, calling fork takes roughly 3ms, while booting a Docker container takes over 100ms. There are now well over ten different unikernel implementations across the spectrum, most of which are open source. Some are what we call ‘purist’, meaning they only work with one language, and some are what we call ‘POSIX’ style, meaning they don’t care what the language is.
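To put that fork number in perspective, here's a rough sketch (POSIX-only, Python 3) that times a fork/exit/wait round trip on your own machine. The absolute numbers vary widely by hardware and OS, so treat this as an illustration rather than a benchmark:

```python
import os
import time

# Time N fork + child-exit + wait round trips.
# The article cites roughly 3ms per fork; your mileage will vary.
N = 100
start = time.perf_counter()
for _ in range(N):
    pid = os.fork()
    if pid == 0:
        os._exit(0)        # child exits immediately, skipping cleanup
    os.waitpid(pid, 0)     # parent waits so we measure the full round trip
elapsed_ms = (time.perf_counter() - start) / N * 1000
print("avg fork+exit+wait: %.3f ms" % elapsed_ms)
```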

So why haven’t we seen much developer adoption yet? There are a few reasons, but one in particular: if you look at the people behind unikernels at these companies, they all have titles like research scientist or distinguished fellow. In a word, unikernels remain out of reach for the common developer because of their low-level nature: you have to find the correct libc version, patch the right native library extension, or twiddle some linker flags. That is more than your average developer should have to wade through just to boot a hello world. In other words, there was not yet a tool that could create and build these with ease. However, their performance, security, and server density benefits are undeniable, and for any one of those reasons alone we see massive adoption coming down the pipeline.

So this is a problem that we are looking at solving with OPS. OPS allows developers to build and run unikernels easily with only a single command.

Want to see OPS in action? All you need is a Linux or Mac machine to work on.

First thing you want to do is download and install OPS via:

curl https://ops.city/get.sh -sSfL | sh

If you are the type that doesn’t like to download binaries like this feel free to check out https://github.com/nanovms/ops and build from source - it’s written in Go.

First thing we’ll do is create a project directory:

mkdir pytest && cd pytest

Then put this in hi.py:

print "yo"

Now we’ll load a Python package. This is a loose analogy to Debian-style packages because, at the end of the day, Python ships with all sorts of stuff besides the interpreter itself.

ops load python_2.7.15rc1 -a hi.py

What we do is download the python 2.7 package and specify the file we want to execute - in this case hi.py.

Our result should look something like this:

Extracting /Users/eyberg/.ops/packages/python_2.7.15rc1.tar.gz to /Users/eyberg/.ops/.staging/python_2.7.15rc1
[python hi.py]
booting /Users/eyberg/.ops/images/python_2.7.15rc1/python.img ...
assigned: 10.0.2.15
yo
exit_group
exit status 1

Cool - you just built and ran your first Python unikernel.

Let’s try something different - this time we’ll run a python webserver.

Put this into hi.py:

from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer

PORT_NUMBER = 8083

#This class handles any incoming request from
#the browser
class myHandler(BaseHTTPRequestHandler):

    #Handler for the GET requests
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type','text/html')
        self.end_headers()
        # Send the html message
        self.wfile.write("Hello World !")
        return

try:
    #Create a web server and define the handler to manage the
    #incoming request
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ' , PORT_NUMBER

    #Wait forever for incoming http requests
    server.serve_forever()

except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()

This script uses Python’s built-in web server to serve requests. One thing you might notice: since we are running these inside of a virtual machine, we don’t care whether you are on Linux or a Mac, so there’s no need to install a matching Python version locally on your machine. We’ll prove that in a second.

But first let’s run this:

ops load python_2.7.15rc1 -p 8083 -a hi.py

This time you can see we are specifying the port number 8083. You should see this as a result:

➜  pytest  ops load python_2.7.15rc1 -p 8083 -a hi.py
Extracting /Users/eyberg/.ops/packages/python_2.7.15rc1.tar.gz to /Users/eyberg/.ops/.staging/python_2.7.15rc1
[python hi.py]
booting /Users/eyberg/.ops/images/python_2.7.15rc1/python.img ...
assigned: 10.0.2.15
Started httpserver on port  8083
10.0.2.2 - - [27/Feb/2019 18:22:37] "GET / HTTP/1.1" 200 -
10.0.2.2 - - [27/Feb/2019 18:22:39] "GET / HTTP/1.1" 200 -

Then we can confirm networking is working by issuing a few requests against it.

➜  ~  curl -XGET http://127.0.0.1:8083/
Hello World !%
➜  ~  curl -XGET http://127.0.0.1:8083/
Hello World !%

You’ll notice that by default OPS uses what is referred to as user-mode networking. This is fine for dev, test, and even staging environments, but not something we’d use in production for a variety of reasons. For one, it works like NAT: it only maps the specified ports from the host - in this case your Linux or Mac machine - to the guest, which is why you can hit it on the loopback address. For production environments, such as when you deploy to GCE or AWS, OPS uses their underlying networking, which utilizes bridge and tap devices. This gives you an idea of how the major public clouds are actually built and run.

Ok for our third and final example let’s run a python 3 program.

Put this in your hi.py:

print "yo"

If we run it we see an error:

➜  pytest  ops load python_3.6.7 -a hi.py

Extracting /Users/eyberg/.ops/packages/python_3.6.7.tar.gz to /Users/eyberg/.ops/.staging/python_3.6.7
[python3 hi.py]
booting /Users/eyberg/.ops/images/python_3.6.7/python3.img ...
assigned: 10.0.2.15
  File "hi.py", line 1
    print "yo"
             ^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print("yo")?
exit_group
exit status 1

That’s because python 3 requires parentheses for print. Let’s fix that and try again.
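Incidentally, if you want a single hi.py that prints correctly under both the python_2.7.15rc1 and python_3.6.7 packages, the standard trick is the __future__ import, which makes print a function on Python 2 as well:

```python
from __future__ import print_function  # no-op on Python 3; makes print a function on Python 2

print("yo")
```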

print("yo")

Now when we run it we see our expected result:

➜  pytest  ops load python_3.6.7 -a hi.py

Extracting /Users/eyberg/.ops/packages/python_3.6.7.tar.gz to /Users/eyberg/.ops/.staging/python_3.6.7
[python3 hi.py]
booting /Users/eyberg/.ops/images/python_3.6.7/python3.img ...
assigned: 10.0.2.15
yo
exit_group
exit status 1
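The earlier web server needs a couple of tweaks for Python 3 as well: the BaseHTTPServer module became http.server, and wfile.write wants bytes rather than str. Here is a sketch of that port, wrapped in a small in-process smoke test (binding port 0 and requesting itself) so it doesn't block when run locally; under OPS you would bind your chosen port and call serve_forever() directly:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class MyHandler(BaseHTTPRequestHandler):
    # Handler for GET requests
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        self.wfile.write(b"Hello World !")  # Python 3: wfile expects bytes


# Port 0 lets the OS pick a free port for this local smoke test;
# in the unikernel you would use a fixed port such as 8083.
server = HTTPServer(('127.0.0.1', 0), MyHandler)
port = server.server_address[1]

# Serve from a background thread so the script can request itself and exit.
t = threading.Thread(target=server.serve_forever, daemon=True)
t.start()

body = urllib.request.urlopen('http://127.0.0.1:%d/' % port).read()
print(body.decode())
server.shutdown()
```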

Cool! I hope this gives you a nice introduction to unikernels. Unikernels are faster and safer than Linux and containers, and we’ll continue to see adoption grow. I was at a unikernel conference in Beijing last year where Alibaba described how they were looking at integrating unikernels into their underlying infrastructure and serverless platform. We feel unikernels are a very powerful force, not just for cloud deployments, but also for edge deployments and various other use cases.

Still interested? We have open-sourced OPS here. So fork it, star it, clone it, and let me know what you end up building!

Thanks for reading!