CoreOS: An open source Linux OS for massive, containerized server deployments

Wouldn’t it be nice if you could manage your infrastructure the same way Google and other huge web companies do? With CoreOS, that’s actually attainable. You get Linux containers to manage your services more reliably, consistently, and securely, without installing packages through apt or yum. Each service’s code and all of its dependencies are packaged into a container that you can run on one or more CoreOS machines.

With Linux containers, you get many of the same benefits as full virtual machines, but you can focus on applications instead of entire virtualized hosts. Containers don’t run their own Linux kernel and don’t need a hypervisor, so there’s virtually no performance overhead. That lets you increase density, which in turn means fewer machines to operate and lower compute spend.


CoreOS is built around a few key components:

  • Docker: This is the Linux container engine in which your code and applications run. It’s installed on each CoreOS machine. You create a container for each service, such as your web server, cache, or database.
  • Fleet: This is the tool you use to run your Docker containers across your CoreOS cluster. With it, you can deploy highly available services, because fleet can ensure that a service’s containers don’t end up on the same machine, availability zone, or region.
  • etcd: This handles service discovery and reading and writing configuration values. It’s replicated, so changes are reflected across the entire cluster.
  • Security: Updates are applied automatically to keep machines secure. Every update is signed offline, separate from the build process, and every update’s metadata is transmitted to the machine over SSL, with the certificate signed by the CoreOS private CA.
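To make the Docker point above concrete, a service’s code and dependencies can be described in a Dockerfile and built into an image that runs unchanged on any CoreOS machine. This is only a sketch; the base image, files, and port are illustrative assumptions, not from the original post:

```dockerfile
# Hypothetical Dockerfile for a single web-server service.
FROM ubuntu:14.04

# Install the service inside the image, not on the host OS.
RUN apt-get update && apt-get install -y nginx

# Ship the service's code along with its dependencies.
COPY site/ /usr/share/nginx/html/

# Port the container listens on.
EXPOSE 80

# Run the service in the foreground so the container stays alive.
CMD ["nginx", "-g", "daemon off;"]
```

Built once with `docker build -t webserver .`, the resulting image can then be run on any CoreOS machine with `docker run -p 80:80 webserver` — no apt or yum on the host required.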
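The fleet scheduling behavior described above is driven by systemd unit files with an extra [X-Fleet] section. A rough sketch of a unit template, with hypothetical service and container names:

```ini
# webserver@.service — a fleet unit template (names are illustrative)
[Unit]
Description=Web server container
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill web%i
ExecStartPre=-/usr/bin/docker rm web%i
ExecStart=/usr/bin/docker run --name web%i -p 80:80 webserver
ExecStop=/usr/bin/docker stop web%i

[X-Fleet]
# Never schedule two instances of this service on the same machine;
# this constraint is what enables the high-availability placement.
Conflicts=webserver@*.service
```

Starting several instances with `fleetctl start webserver@1.service webserver@2.service` lets fleet place each one on a different machine in the cluster.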
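Reading and writing configuration values through etcd, as described above, looks roughly like this from any machine in the cluster (the key names are made up for illustration, and the commands use the early etcdctl set/get syntax from the CoreOS era):

```
# Write a config value; etcd replicates it to every machine in the cluster.
etcdctl set /services/webserver/port 80

# Read it back — this works from any other CoreOS machine, too.
etcdctl get /services/webserver/port

# List the keys registered under a directory (simple service discovery).
etcdctl ls /services
```

Because etcd is replicated, a value written on one machine is readable from all the others, which is what makes it usable for cluster-wide service discovery.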
