Docker Breakout PoC

From: Ryan Bales 
------------------------------------------------------
Rough day for folks riding the bleeding edge of infrastructure.

http://stealth.openwall.net/xSports/shocker.c



Ryan Bales
http://twitter.com/#!/thinkt4nk
https://github.com/thinkt4nk

===============================================================
From: Ryan Bales
------------------------------------------------------

It apparently only affects pre-1.0, but I guess it proves there's some value in giving tech time to mature before adopting it whole-hog.

Ryan Bales
http://twitter.com/#!/thinkt4nk
https://github.com/thinkt4nk

===============================================================
From: Jonathan Calloway
------------------------------------------------------

Proof that old-school chroot jails are better??

===============================================================
From: flushy@flushy.net
------------------------------------------------------

I just had a co-worker try this on Fedora with Docker 1.0, and it didn't work. I notified some security teams, so we'll see what they say. It may have been fixed already.

--b

===============================================================
From: flushy@flushy.net
------------------------------------------------------

BTW: https://access.redhat.com/site/solutions/965303

Apparently, the bug revolved around kernel "capabilities". Prior to Docker 1.0, the docker-engine (0.11) disabled a selection of "caps" before passing control to the container. In version 0.12 of the docker-engine, they did the opposite: they disabled all kernel caps, then re-enabled only the ones they wanted. The bug used a kernel cap that wasn't disabled (and wasn't needed by the container) in order to break out.

Red Hat patched the docker-engine (v0.11) prior to release in RHEL 7 to strip all caps, similar to the 0.12 version of the docker-engine. Thus, the docker that ships with RHEL 7 isn't affected by this bug. I'm not sure about other vendors... but I'd imagine they did similar things.

--b
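
For what it's worth, a rough sketch of the same deny-by-default idea from the operator's side: newer docker run releases expose --cap-drop / --cap-add directly (availability of the flags on your version, and the "myapp" image name, are assumptions here, not something from the advisory):

    # Drop every capability, then add back only what the app actually needs;
    # NET_BIND_SERVICE lets it bind a port below 1024 without full root caps.
    docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp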

===============================================================
From: Dan Lyke
------------------------------------------------------

On Fri, 20 Jun 2014 14:02:58 +0000 flushy@flushy.net wrote:

I had a Twitter exchange with one of the Docker developers, and he sent me to https://news.ycombinator.com/item?id=7909622

"Please remember that at this time, we don't claim Docker out-of-the-box is suitable for containing untrusted programs with root privileges."

So if you're thinking "phew, good thing we upgraded to 1.0 or we were toast", you need to change your underlying configuration now. Add apparmor or selinux containment, map trust groups to separate machines, or ideally don't grant root access to the application.

In other words, Docker is great for configuration encapsulation, and even uses like https://github.com/subuser-security/subuser where you have apps that need to run in containers but don't need root, but you shouldn't let root apps have willy-nilly access.

Dan
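
A minimal sketch of that last point, assuming your app doesn't actually need root inside the container (the "appuser" name and "myapp" image are hypothetical; docker run's -u/--user flag is what does the work):

    # Run the containerized process as an unprivileged user instead of root.
    # The user has to exist inside the image (e.g. created with useradd or a
    # USER instruction during the image build).
    docker run -u appuser myapp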

===============================================================
From: Billy
------------------------------------------------------

The way I've understood it thus far (and I've only played a little): Docker is great for contained deployment. You get a single package that you can deploy in Dev, QA, and Prod. I relate it to the way Mac apps keep most of their dependencies inside the program.app folder that the program installs. Different lib versions? No problem. Conflicting package deps? No problem! Consistent deployment? No problem! Deployment separation (devops installs, the developer does not)? No problem!

I think security is a by-product. This is the OpenShift type of deployment, a way of hybrid virtualization. It's the same as vserver, but with a better API, better tools, a better community, and better native kernel support. This opens up a different way to use our machines.

So while some people are looking for a sandbox, my clients at least are looking for a way for their devs to build, deploy, and support their apps in a way that is flexible and scalable. Add to that automatic app or deployment life cycle, rule-based scaling, and rapid deployment, and you're talking "cloud".

Additionally, Docker (and other container-based abstraction) allows an admin to put more apps on their VMs, and thus more on their bare-metal servers, than pure virtualization would allow. This translates to density, which has value in real dollars.

So, I think people need to sit back and decide exactly what problem they're trying to solve before they pick up a new shiny tech and throw it in their environment.

--b

Sent from my iPhone
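
To make the "single package across Dev, QA, and Prod" point concrete, a hedged sketch using stock docker commands (image name, tag, and port are made up for illustration):

    # Build once, then ship the exact same image artifact to each environment.
    docker build -t myapp:1.0 .
    docker save myapp:1.0 > myapp-1.0.tar      # hand this file to QA / Prod
    docker load < myapp-1.0.tar                # on the target host
    docker run -d -p 8080:8080 myapp:1.0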

===============================================================
From: Dave Brockman
------------------------------------------------------

Could you expand and explain this a little? I'm not following how Docker or any other container or abstraction allows more applications per VM.

Regards,

dtb

--
"Some things in life can never be fully appreciated nor understood unless experienced firsthand. Some things in networking can never be fully understood by someone who neither builds commercial networking equipment nor runs an operational network." RFC 1925

===============================================================
From: Billy
------------------------------------------------------

Let me rephrase that: it allows more applications per server than if you put those apps in their own VMs.

You can deploy a Docker container on a bare-metal server, or inside a VM under a hypervisor. It doesn't care, and it acts the same. The difference is in your flexibility:

Bare metal with Docker apps: bring up or tear down apps via Docker. Outage or maintenance means migrating each app to another bare-metal server.

VM under hypervisor with Docker apps: bring up or tear down as before, or migrate entire VMs to different hypervisors (vMotion or migration). Outage or maintenance means the same migration path you use for your VMs -- for high-availability VMs, this can be automatic.

There are pros and cons to each, but the idea is that using Docker for application containment instead of a VM means more density per machine, since Docker is lightweight. Obviously, if you throw all the apps on the same OS on the same box, you'll get higher density still, but you don't gain the benefits of the container deployment.

--b

Sent from my iPhone

===============================================================
From: Dave Brockman
------------------------------------------------------

I follow your explanation below. I do not see how that contributes to the above statement, unless you are stating that Docker is "lighter" than no container system at all. Flexibility, yes, absolutely, I see it clear as day. Improved density with additional container overhead, and additional resource consumption (all those specific libraries take up disk and memory), I can't see it. Maybe I need coffee....

Regards,

dtb

===============================================================
From: Billy
------------------------------------------------------

If you deploy each app via a VM instead of Docker (or another container), you have to duplicate the kernel, the core libs, and the driver stack, plus the hypervisor overhead of additional context switching and its own magic.

So yes, a container has overhead, but its overhead is far less than the overhead of a full additional VM stack. A container is like a chroot with its own thread, plus some new kernel segregation support, and some automagic to give visibility to the core libs and network stack, plus tool support for building, migration, and importing.

So yeah, it's way more lightweight than a new VM.

--b

Sent from my iPhone
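
A rough illustration of that "chroot plus kernel segregation support" point: containers reuse the host kernel and lean on namespaces and cgroups rather than booting a second OS, which is where the density win comes from. The util-linux unshare tool shows the same building blocks outside Docker (the /srv/rootfs path is hypothetical, and a real container runtime also mounts /proc, sets up cgroups, wires the network, etc.):

    # Give a shell its own PID, mount, UTS and network namespaces, then
    # chroot it into a prepared root filesystem -- no extra kernel, no
    # hypervisor.
    sudo unshare --pid --fork --mount --uts --net chroot /srv/rootfs /bin/sh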

===============================================================
From: Jonathan Calloway
------------------------------------------------------

When you say that Docker containers house all of the resources an application needs, does this include things like databases?

Also, while I like your OS X application bundle analogy, bundles often reach outside of themselves to do things. For example, they often leave XML-based preference files (.plist) and often store things they need in Application Support folders (whether in system or user space). As far as I know, the default database that OS X uses is Postgres, and some applications will take advantage of this. While this certainly doesn't affect an application's portability between Macs, how does this work with Docker? Can an app only use a "flat" database? Or, if the app requires MySQL, for example, does it simply store the database inside itself, but use the MySQL binaries etc. outside of itself?

===============================================================
From: Billy
------------------------------------------------------

You can expose an external directory. Here is one example:

http://txt.fliglio.com/2013/11/creating-a-mysql-docker-container/

Sent from my iPhone
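
In docker terms, "expose an external directory" usually means a bind mount with -v, roughly along the lines of that article (the image name and host path below are placeholders):

    # Keep the database files on the host so the container stays disposable.
    # "some-mysql-image" stands in for whatever MySQL image you actually use.
    docker run -d --name mydb \
        -v /srv/mysql-data:/var/lib/mysql \
        some-mysql-image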

===============================================================
From: Dan Lyke
------------------------------------------------------

On Tue, 24 Jun 2014 00:11:40 -0400 "Jonathan Calloway" wrote:

So... if I understand it right, Docker is somewhere between a VM and a chroot jail.

If you're not a long-term Un*x geek, chroot is a mechanism whereby you can change what a program sees as the root of a filesystem. So I might want to set up an alternate root somewhere down in /home/danlyke/untrusted/skype, put just enough /etc, /dev, /bin, /lib, etc. into that directory, and run Skype as though that were root. The tricky bits of doing this by hand are /dev, obviously, and dynamic linking.

I believe that with its VM-ness, Docker adds some notion of networking encapsulation. So if you wanted a database in that space, you could either:

1. Use the network to access the database running on the host machine.
2. Use some sort of symlink or remapping space to put the Un*x socket files in a place the chrooted files could get to them, and run the database on the host machine.
3. Run the database in the Docker container, alongside your untrusted app.

But... from a purely practical level, if you don't need a shared database in your app, then SQLite is an awesome solution. Your app gets an SQL database, it can statically link, and it gets one data file which you can read with other tools for debugging purposes.

Dan
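
For anyone who hasn't built one by hand, a bare-bones version of the chroot Dan describes looks something like this (paths are hypothetical, and as he says, /dev and dynamic linking are the fiddly parts -- this sketch only handles the libraries):

    # Build a minimal alternate root and drop a shell into it.
    JAIL=/home/danlyke/untrusted/skype
    mkdir -p "$JAIL"/bin
    cp /bin/sh "$JAIL"/bin/
    # Copy the shared libraries the binary needs -- the dynamic-linking gotcha.
    ldd /bin/sh | grep -o '/[^ ]*' | while read lib; do
        cp --parents "$lib" "$JAIL"/
    done
    sudo chroot "$JAIL" /bin/sh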