
June 15, 2017

The best open source solution for microservices networking



Introduction

There is a big (and justified) hype around containers and microservices.
Many people talk about the subject, but few have implemented a real project.
There are also a lot of excellent resources on the web, so there is no need for my additional contribution.
I just want to offer my few readers another proof that a great solution exists for container networking, and that it works well.
You will find evidence in this post, along with pointers to resources and tutorials.
I will explain it in very basic terms, as I did for Cisco ACI and SDN, because I'm not talking to network specialists (you know I'm not one either) but to software developers and designers.
Most of the content here is reused from my sessions at Codemotion 2017 in Rome and Amsterdam (you can see the recordings on YouTube).

Containers Networking

When the world moved from bare metal servers to virtual machines, virtual networks were also created and added great value (plus some need for management).


Initially networking was simple: physical hosts connected to a network

Of course virtual networks make the life of developers and server administrators easier, but they also add complexity for network managers: now there are two distinct networks that need to be managed and integrated.
You cannot simply assume that an overlay network runs on a dumb physical pipe with infinite bandwidth, zero latency and no need for end-to-end troubleshooting.


Virtual Machines connected to an overlay network

With the advent of containers, their virtual networking is layered on top of the VMs' virtual network (the majority of containers run inside VMs for a number of reasons), though there are good examples of container runtimes on physical hosts.
So now you have three network layers stacked on top of each other, and the need to manage the network end to end makes your work even more complex.


Containers inside VMs: many layers of overlay networks

This increased abstraction creates some issues when you try to leverage the value of resources in the physical environment:
- connectivity: it's difficult to insert network services, like load balancers and firewalls, in the data path of microservices (regardless of the virtual or physical nature of the appliances).
- performance: every overlay tier brings its own encapsulation (e.g. VXLAN). Encapsulation over encapsulation over encapsulation starts penalizing performance... just a little  ;-)
- hardware integration: some advanced features of your network (performance optimization, security) cannot be leveraged.
Do not despair: we will see that a solution exists for this mess.

Microservices Networking


This short paragraph describes how the networking layer is implemented inside the container runtime.
Generally it is based on a pluggable architecture: the container engine delegates the management of the containers' traffic to a plugin. You can choose among a number of good solutions from the open source community, including the default implementation from Docker.

Minimally, the networking layer provides (a manual sketch of these steps follows the list):
- IP Connectivity in Container’s Network Namespace
- IPAM, and Network Device Creation (eth0)
- Route Advertisement or Host NAT for external connectivity
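To make these three responsibilities concrete, here is a minimal manual sketch of what a networking plugin does under the hood for a single container, using plain iproute2 and iptables commands (interface names and addresses are made up for illustration; real plugins automate all of this):

# create a network namespace playing the role of the container's namespace
ip netns add demo-container
# connectivity: a veth pair, one end on the host, the other becoming the container's eth0
ip link add veth-host type veth peer name veth-cont
ip link set veth-cont netns demo-container
ip netns exec demo-container ip link set veth-cont name eth0
# IPAM: assign addresses and bring the interfaces up
ip addr add 172.18.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec demo-container ip addr add 172.18.0.2/24 dev eth0
ip netns exec demo-container ip link set eth0 up
# external connectivity via host NAT (the alternative is advertising the route)
ip netns exec demo-container ip route add default via 172.18.0.1
iptables -t nat -A POSTROUTING -s 172.18.0.0/24 -j MASQUERADE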



The networking for containers

There are two main architectures that allow you to plug in an external networking implementation: CNM and CNI. Let's have a look at them.

The Container Network Model (CNM)    


Proposed by Docker to provide networking abstractions/APIs for container networking.
It is based on the concept of a Sandbox, which contains the configuration of a container's network stack (a Linux network namespace).
An Endpoint is a container's interface into a network (a pair of virtual Ethernet interfaces).
A Network is a collection of endpoints that can communicate with each other.
A container can have multiple endpoints (and therefore belong to multiple networks).

CNM allows the co-existence of multiple drivers, with each network managed by one driver.
It provides Driver APIs for IPAM and for endpoint creation/deletion:
- IPAM Driver APIs: Create/Delete Pool, Allocate/Free IP Address
- Network Driver APIs: Network Create/Delete, Endpoint Create/Delete/Join/Leave

CNM is used by the Docker engine, Docker Swarm and Docker Compose.
It also works with other schedulers that run standard containers, e.g. Nomad or Mesos.
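In practice, once a CNM driver is registered with the Docker engine you consume it through the usual docker network commands. A hedged example (the driver and IPAM driver names are placeholders for whatever plugin you install):

# create a network backed by a third-party CNM driver and IPAM driver
docker network create -d my-net-driver --ipam-driver my-ipam-driver --subnet 10.1.1.0/24 demo-net
# the engine asks the driver to create an endpoint and join it to the container's sandbox
docker run -itd --net=demo-net --name=web nginx
docker network inspect demo-net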


Container Network Model


 

The Container Network Interface (CNI)

 

Proposed by CoreOS as part of the appc specification, and also used by Kubernetes.
It defines a common interface between the container runtime and the network plugin.
It gives the driver full freedom to manipulate the container's network namespace.
The network is described by a JSON configuration file.

Plugins support two commands (see the sketch after this list):
- Add Container to Network 
- Remove Container from Network     
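To give an idea of how this works in practice, here is a rough sketch based on the reference bridge plugin (paths and values are illustrative): the runtime writes a JSON network configuration, then invokes the plugin binary with the operation and the container details passed as environment variables, piping the configuration on stdin.

cat > /etc/cni/net.d/10-demo.conf <<'EOF'
{
    "name": "demo-net",
    "type": "bridge",
    "bridge": "cni0",
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16"
    }
}
EOF
# "Add container to network": roughly what the container runtime does for you
CNI_COMMAND=ADD \
CNI_CONTAINERID=demo123 \
CNI_NETNS=/var/run/netns/demo123 \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/bridge < /etc/cni/net.d/10-demo.conf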

Container Network Interface

Many good implementations of the models above are available on the web.
You can pick one to complement the default implementation with a more sophisticated solution and benefit from better features.  

It looks so easy on my laptop. Why is it complex?


When a developer sets up the environment on his or her laptop, everything is simple.
You test your code and the infrastructure just works (you can also enjoy managing... the infrastructure as code).
No issues with performance, security, bandwidth, logs, or conflicts on resources (IP addresses, VLANs, names…).
But when you move to an integration test environment, or to a production environment, it's no longer that easy.
IT administrators and the operations team are well aware of the need for stability, security, multi-tenancy and other enterprise-grade features.
So not all solutions are equal, especially for networking. Let's discuss their impact on Sally and Mike:





Sally (software developer) - she expects:
- to develop and test fast
- agility and elasticity
- not to have to care about other users

Mike (IT manager) - he cares about:
- managing the infrastructure
- stability and security
- isolation and compliance

These conflicting goals and priorities challenge collaboration and the ability to easily adopt a DevOps approach.
A possible solution is policy-based container networking.
Policy-based management is simpler thanks to declarative tags (used instead of complex command syntax), and it is faster because you manage groups of resources instead of single objects (think of the cattle vs. pets example).

What is Contiv


Contiv unifies containers, VMs, and bare metal servers with a single networking fabric, allowing container networks to be addressable from VM and bare-metal network endpoints.  Contiv combines strong network performance, support for industry-leading hardware, and an application-oriented policy that can move across networks together with the application.

Contiv's goal is to manage the "operational intent" of your deployment in a declarative way, as you generally do for the "application intent" of your microservices. This allows for a true infrastructure as code management and easy implementation of DevOps practices.



 


Contiv provides an IP address per container and eliminates the need for host-based port NAT. It works with different kinds of networks like pure layer 3 networks, overlay networks, and layer 2 networks, and provides the same virtual network view to containers regardless of the underlying technology. 

Contiv works with all major schedulers like Kubernetes, Docker Swarm, Mesos and Nomad. These schedulers provide compute resources to your containers and Contiv provides networking for them. Contiv supports both CNM (the Docker networking architecture) and CNI (the CoreOS and Kubernetes networking architecture).
Contiv has L2, L3 (BGP), overlay (VXLAN) and ACI modes. It has built-in east-west service load balancing and provides isolation between control and data traffic.
It manages global resources: IPAM and VLAN/VXLAN pools.


Contiv Architecture


Contiv is made of a master node and an agent that runs on every host of your server farm:

Contiv's support for clustered deployments

The master node(s), called Netmaster, offer tools to manipulate Contiv objects: they implement CRUD (create, read, update, delete) operations through a REST interface. Netmaster is expected to be used by infra/ops teams and offers RBAC (role-based access control).

The host agent (Netplugin) implements cluster-wide network and policy enforcement. It is stateless, which is very useful in case of node failures, restarts and upgrades.

A command line utility named netctl is also provided: it is a client of Netmaster's REST API.

Contiv's architecture


Examples


Learning Contiv is very easy: the Contiv website offers a great tutorial that you can download and run locally.
For your convenience, I executed it on my computer and copied some screenshots here, with my comments to explain it step by step.

First, let's look at normal docker networks (without Contiv) and how you create a new container and connect it to the default network:



Networks in Docker


You can inspect the virtual bridge (in the Linux server) that is managed by Docker: look at the IPAM section of the configuration and its Subnet, then at the vanilla-c container and its IP address.


How Docker sees its networks


You can also look at the network config from within the container:


the network config from within the container
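If you prefer to type along instead of looking at my screenshots, the commands in this first part of the tutorial look roughly like this (the image and container names may differ from the ones used in the tutorial):

docker network ls                        # the default bridge, host and none networks
docker network inspect bridge            # look at the IPAM section and its Subnet
docker run -itd --name=vanilla-c alpine sh
docker network inspect bridge            # vanilla-c now appears with its IP address
docker exec vanilla-c ip addr show eth0  # the network config from within the container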

Now we want to create a new network with Contiv, using its netctl command line interface:

Contiv's netctl command line interface
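The screenshot above corresponds to commands along these lines (the subnet and network name are just examples; check the tutorial for the exact syntax of your Contiv version):

netctl net create --subnet=10.1.1.0/24 contiv-net
netctl net ls          # the new network appears in the default tenant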

Here you can see how Docker lists and uses a Contiv network:


how Docker lists and uses a Contiv network

Look at the IPAM section, the name of the Driver, the name of the network and of the tenant:



We now connect a new container to the contiv-net network as it is seen by Docker: the command is identical to the one you would use with a network created by Docker itself.
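A hedged sketch of that step (contiv-net is the network created above; the image is just an example):

docker run -itd --net=contiv-net --name=contiv-c1 alpine sh
docker network inspect contiv-net      # contiv-c1 is listed with an IP address from the Contiv subnet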



Multi tenancy

You can create a new Tenant using the netctl tenant create command:

Creating tenants in Contiv

A tenant has its own networks, whose names and even subnets can overlap with other tenants' networks: in the example below, the two networks are completely isolated and the default tenant and the blue tenant ignore each other, even though the two networks have the same name and use the same subnet.
Everything works as if the other network did not exist (look at the "-t blue" argument in the commands).

Two different networks, with identical name and subnet, can exist in different tenants
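The commands in those screenshots look roughly like this (names and subnets are illustrative, and flags may vary slightly across Contiv versions):

netctl tenant create blue
netctl net create -t blue --subnet=10.1.1.0/24 contiv-net   # same name and subnet as the default tenant's network
netctl net ls -t blue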


Let’s attach a new container to the contiv-net network in the blue tenant (the tenant name is explicitly used in the command, to specify the tenant's network):
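With Docker, a network belonging to a non-default tenant is typically referenced as network/tenant; a hedged example (check the tutorial for the exact form):

docker run -itd --net="contiv-net/blue" --name=blue-c1 alpine sh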


All the containers connected to this network can communicate with each other. The network extends across the whole cluster and benefits from all the features of the Contiv runtime (see the website for a complete description).


The policy model: working with Groups


Contiv provides a way to apply isolation policies among groups of containers (across tenants, or within a single tenant). To do that we create a simple policy called db-policy, add some rules to it that define which ports are allowed, and then associate the policy with a group (db-group, which will contain all the containers that need to be treated the same).

Creating a policy in Contiv



Adding rules to a policy


Finally, we associate the policy with a group (a group is an arbitrary collection of containers, e.g. a tier for a microservice) and then run some containers that belong to db-group:


Creating a group in Contiv, so that many containers get the same policies
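Put together, the policy workflow shown in those screenshots is approximately the following (rule numbers, ports and names are examples reconstructed from the flow above; the exact flags may differ slightly in your Contiv version):

netctl policy create db-policy
netctl policy rule-add db-policy 1 -direction=in -protocol=tcp -action=deny
netctl policy rule-add db-policy 2 -direction=in -protocol=tcp -port=3306 -action=allow
netctl group create contiv-net db-group -policy=db-policy
# every container attached to the group gets the same policy automatically
docker run -itd --net=db-group.contiv-net --name=db1 alpine sh
docker run -itd --net=db-group.contiv-net --name=db2 alpine sh
docker run -itd --net=db-group.contiv-net --name=db3 alpine sh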




The policy named db-policy (defining, in this case, which ports are open and closed) is now applied to all three containers: managing many endpoints as a single object makes things easy and fast; just think about auto-scaling (especially when integrated with Swarm, Kubernetes, etc.).

The tutorial shows many other interesting features in Contiv, but I don't want to make this post too long  :-)


Features that make Contiv the best solution for microservices networking


  • Support for grouping applications or application components.
  • Easy scale-out: instances of containerized applications are grouped together and managed consistently.
  • Policies are specified on a microservice tier, rather than on individual container workloads.
  • Efficient forwarding between microservice tiers.
  • Contiv allows for a fixed VIP (published in DNS) for a microservice.
  • Containers within the microservices can come and go fast, as resource managers auto-scale them, but the policies are already there... waiting for them.
  • Containers' IP addresses are immediately mapped to the service IP for east-west traffic.
  • Contiv eliminates the single point of forwarding (proxy) between microservice tiers.
  • Application visibility is provided at the service level (across the cluster).
  • Performance is great (see references below).
  • It mirrors the policy model that made Cisco ACI an easy and efficient solution for SDN, regardless of the availability of an ACI fabric (Contiv also works with other hardware and even with all-virtual networks).

I really invite you to have a look and test it yourself using the tutorial.

It's easy and not invasive at all: seeing is believing.


October 20, 2015

DevOps, Docker and Cisco ACI - part 2

This post is a follow-up to the initial discussion of the DevOps approach based on Linux containers (specifically with Docker).
Here I elaborate on the advantages provided by Cisco ACI (and some more projects in the open source space) when you work with containers.

Policies and Containers  

Cisco ACI offers a common policy model for managing IT operations.
It is agnostic: bare metal, virtual machines, and containers are treated the same, offering a unified policy language: one clear security model, regardless of how an application is deployed.


ACI models how components of an application interact, including their requirements for connectivity, security, quality of service (e.g. reserved bandwidth for a specific service), and network services.   ACI offers a clear path to migrate existing workloads towards container-based environments without any changes to the network policy, thanks to two main technologies:
  • ACI Policy Model and OpFlex   
  • Open vSwitch (OVS)   

OpFlex is a distributed policy protocol that allows application-centric policies to be enforced within a virtual switch such as OVS.
Each container can attach to an OVS bridge, just as a virtual machine would, and the OpFlex agent helps ensure that the appropriate policy is established within OVS (because it's able to communicate with the Controller, bidirectionally). 



The result of this integration is the ability to build and manage a complete infrastructure that spans across physical, virtual, and container-based environments.
Cisco plans to release support for ACI with OpFlex, OVS, and containers before the end of 2015.


Value added from Cisco ACI to containers

I will explain how ACI supports the two main networking types in Docker: veth and macvlan.
This can be done already, because it is not based only on OpFlex.

Containers Networking option 1 - veth

veth is the networking mode that leads to virtual bridging with br0 (a Linux bridge, the default option with Docker) or OVS (Open vSwitch, usually adopted with KVM and OpenStack).
As a premise, I remind you that ACI manages the connectivity and the policies for bare metal servers, VMs on any hypervisor and network services (load balancers, firewalls, etc.) consistently and easily:

Cisco ACI as the any to any network integration in the data center

On top of that, you can add containers running on bare metal Linux servers or inside virtual machines (different use cases make one of the options preferred, but from an ACI standpoint it's the same):


That means that applications (and the policies enabling them) can span any platform: servers, VMs and containers at the same time. Every service or component that makes up the application can be deployed on the platform that is most convenient for it in terms of scalability, reliability and management:


And the extension of the ACI fabric to virtual networks (with the OpFlex-enabled OVS switch) allows applying the policies to any virtual endpoint that uses virtual Ethernet, like Docker containers configured in veth mode.

ACI model brought to all workloads at the same time: Docker, VM, bare metal


Advantages from ACI with Docker veth:

With this architecture we can get two main results:
- consistency of connectivity and service policies across physical, virtual and/or container workloads (LXC and Docker);
- abstraction of the end-to-end network policy for location independence, together with Docker portability (via shared repositories).

Containers Networking option 2 - macvlan

MACVLAN does not provide a network bridge for the Ethernet side of a Docker container to connect to.
You can think of MACVLAN as a hypothetical cable where one end is the container's eth0 and the other end is the interface on the physical switch / ACI leaf.
The hook between them is the VLAN (or the trunked VLAN) in between.
In short, when specifying a VLAN with MACVLAN, you tell the container bound to eth0 on Linux to use VLAN XX (defined as access or trunked).
Connectivity is established when it matches the other end of the cable, i.e. VLAN XX on the switch port (access or trunk).

At this point you can map VLANs to EPGs (End Point Groups) in ACI, to build policies that group containers as endpoints needing the same treatment, i.e. applying Contracts to the groups of containers (a minimal sketch of this mapping follows):
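Here is a minimal sketch of the plumbing on the Linux side (the VLAN ID and interface names are made up; on the ACI side the corresponding EPG must have a static binding for that VLAN on the leaf port):

# a VLAN sub-interface towards the ACI leaf, plus a macvlan interface on top of it
ip link add link eth0 name eth0.50 type vlan id 50
ip link add link eth0.50 name mvlan50 type macvlan mode bridge
# a container attached to mvlan50 sends its traffic tagged with VLAN 50,
# and ACI classifies it into the EPG bound to VLAN 50 on that port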


Advantages from ACI with Docker macvlan:

This configuration provides two advantages (the first one is common to veth):
- it extends the portability of Docker-based applications, thanks to the independence of ACI policies from the server's location;
- network throughput increases by 5% to 15% (mileage varies; further tuning and tests will provide more detail) because there is no virtual switching consuming CPU cycles on the host.



Intent based approach

A new intent-based approach is making its way into networking. An intent-based interface enables a controller to manage and direct network services and network resources based on a description of the "intent" for network behaviors. Intents are described to the controller through generalized and abstracted policy semantics, instead of OpenFlow-like flow rules. The intent-based interface allows for a descriptive way to get what is desired from the infrastructure, unlike the current SDN interfaces, which are based on describing how to provide different services. This interface will accommodate orchestration services and network- and business-oriented SDN applications, including OpenStack Neutron, Service Function Chaining, and Group Based Policy.

Docker plugins for networks and volumes

Cisco is working on an open source project that aims at enabling intent-based configuration for both networking and volumes. This will exploit the full ACI potential in terms of defining the system behavior via policies, but it will also work with non-ACI solutions.
Contiv netplugin is a generic network plugin for Docker, designed to handle networking use cases in clustered multi-host systems.
It's still a work in progress and details can't be shared at this time, but... stay tuned to see how Cisco is also leading in the open source world.



Mantl: a Stack to manage Microservices  

And, just so you know, another project that Cisco is delivering is targeted at the lifecycle and the orchestration of microservices.
Mantl has been developed in house, as a framework to manage the cloud services offered by Cisco. It can be used by everyone for free under the Apache License.
You can download Mantl from GitHub and see the documentation here.

Mantl allows teams to run their services on any cloud provider. This includes bare metal servers, OpenStack, AWS, Vagrant and GCE. Mantl uses tools that are industry-standard in the DevOps community, including Marathon, Mesos, Kubernetes, Docker, Vault and more.
Each layer of Mantl's stack allows for a unified, cohesive pipeline, whether you are managing Mesos or Kubernetes clusters during a peak workload or starting new VMs with Terraform. Whether you are scaling up by adding new VMs in preparation for a launch, or deploying multiple nodes on a cluster, Mantl allows you to work with every piece of your DevOps stack in a central location, without backtracking to debug or recompile code to ensure that the microservices you need will function when you need them to.

When working in a container-based DevOps environment, having the right microservices can be the difference between success and failure on a daily basis. Through the Marathon UI, one can stop and restart clusters, kill sick nodes, and manage scaling during peak hours. With Mantl, adding more VMs for QA testing or live use is simple with Terraform, without needing to piece together code to ensure that the two work well together without errors. Addressing microservice conflicts can severely impact productivity. Mantl cuts down the time spent working through conflicts with microservices, so DevOps teams can spend more time working on the application.






Key takeaways

ACI offers a seamless policy framework for application connectivity for VMs, physical hosts and containers.

ACI integrates Docker without requiring gateways (otherwise required if you build the overlay from within the host), so virtual and physical workloads can be merged in the deployment of a single application.

Intent-based configuration makes networking easier. Plugins enabling intent-based configuration for Docker and integration with SDN solutions are coming fast.

Microservices are a key component of cloud native applications. Their lifecycle can be complicated, but tools are emerging to orchestrate it end to end. Cisco Mantl is a complete solution for this need and is available for free on GitHub.


References

Much of the information has been taken from the following sources.
You can refer to them for a deeper investigation of the subject:

https://docs.docker.com/userguide/
https://docs.docker.com/articles/security/
https://docs.docker.com/articles/networking/   
http://www.dedoimedo.com/computers/docker-networking.html
https://mesosphere.github.io/presentations/mug-ericsson-2014/  
Exploring Opportunities: Containers and OpenStack
ACI for Simple Minds
http://www.networkworld.com/article/2981630/data-center/containers-key-as-cisco-looks-to-open-data-center-os.html
http://blogs.cisco.com/datacenter/docker-and-the-rise-of-microservices    
ACI and Containers white paper 
Cisco and Red Hat white paper   
Opendaylight and intent
Intent As The Common Interface to Network Resources
Mantl Introduces Microservices as a Stack
Project mantl



October 9, 2015

DevOps, Docker and Cisco ACI - part 1

In this post I try to describe the connection between the need for a fast IT, the usage of Linux containers to quickly deploy cloud native applications, and the advantages provided by Cisco ACI in container networking.
The discussion is split into two posts to make it more... agile.
A big thank you to Carlos Pereira (@capereir), Frank Brockners (@brockners) and Juan Lage (@JuanLage), who provided content and advice on this subject.

DevOps – it’s not tooling, it’s a process optimization


I will not define DevOps again; you can find it in this post and in this book.
I just want to remind you that it's not a product or a technology, but a way of doing things.
Its goal is to bring the fences down between the software development teams and the operations team, streamlining the flow of an IT project from development to production.
The steps are:
  1. alleviate bottlenecks (systems or people) and automate as much as possible,
  2. feed information back so problems are solved by design in the next iteration,
  3. iterate as often as possible (continuous delivery).



Business owners push IT to deliver faster, and application development via DevOps is changing the behavior of IT.
Gartner defined Bimodal IT as the parallel management of cloud native applications (DevOps) and more mature systems that require consolidated best practices (like ITIL) and tools supporting their lifecycle.



One important aspect of DevOps is that the infrastructure must be flexible, provisioned on demand (and disposed of when no longer needed). So, if it is programmable, it fits much better into this vision.

 

Infrastructure as code

Infrastructure as code is one of the mantras of DevOps: you can save the definition of the infrastructure (and the policies that define its behavior) in a source code repository, just as you do with the code for your applications.
In this way you can automate the build and the management very easily.

There are a number of tools supporting this operational model. Some examples:



One more example of a DevOps tool is the ACI toolkit, a set of Python libraries that expose the ACI network fabric to DevOps as a code library.

 
You can download it from:


The ACI Toolkit exposes the ACI object model to programming languages so that you can create, modify and manage the fabric as needed.
Remember that one of the most important advantages of Cisco's vision of SDN is that you can manage the entire system as a whole.
There is no need to configure or manage single devices one by one, unlike other approaches to SDN (e.g. OpenFlow).




So you can create, modify and delete all of the following objects and their relationships:




Linux Containers and Docker


Docker is an open platform for sysadmins and developers to build, ship and run distributed applications.  Applications are easily and quickly assembled from reusable and portable components, eliminating the siloed approach between development, QA, and production environments.

Individual components can be microservices coordinated by a program that contains the business process logic (an evolution of SOA, or Service Oriented Architecture). They can be deployed independently and scaled horizontally as needed, so the project benefits from flexibility and efficient operations. This is of great help in DevOps.

At a high-level, Docker is built of:
- Docker Engine: a portable and lightweight runtime and packaging tool
- Docker Hub: a cloud service for sharing applications and automating workflows
There are more components (Machine, Swarm) but that's beyond the basic overview I'm giving here.



Docker’s main purpose is the lightweight packaging and deployment of applications.   

Containers are lightweight, portable, isolated, self-sufficient "slices of a server" that contain any application (often they contain microservices).
They deliver on the full DevOps goal:
- Build once… run anywhere (Dev, QA, Prod, DR).
- Configure once… run anything (any container).  

Processes in a container are isolated from processes running on the host OS or in other Docker containers.
All processes share the same Linux kernel.
Docker leverages Linux containers to provide separate namespaces for containers, a technology that has been present in Linux kernels for 5+ years. The default container format is called libcontainer. Docker also supports traditional Linux containers using LXC.
It also uses control groups (cgroups), which have been in the Linux kernel even longer, to implement auditing and limiting of resources (such as CPU, memory and I/O), and union file systems that support layering of the container's file system.

 

Kernel namespaces isolate containers, avoiding visibility between containers and containing faults. Namespaces isolate the following (a quick way to observe them is sketched after this list):
◦     pid (processes)
◦     net (network interfaces, routing)
◦     ipc (System V interprocess communication [IPC])
◦     mnt (mount points, file systems)
◦     uts (host name)
◦     user (user IDs [UIDs])    
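A quick way to see namespaces and cgroups at work is to compare a container's namespaces with the host's and to set resource limits at run time; a small sketch using flags available in Docker at the time of writing:

# cgroups: limit memory and CPU shares for a container
docker run -itd -m 256m --cpu-shares=512 --name=limited ubuntu bash
# namespaces: find the container's main process and look at its namespace links
PID=$(docker inspect -f '{{.State.Pid}}' limited)
sudo ls -l /proc/$PID/ns   # different inode numbers than /proc/1/ns mean separate namespaces
sudo ls -l /proc/1/ns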

Containers or Virtual Machines


Containers are isolated, portable environments where you can run applications along with all the libraries and dependencies they need.
Containers aren’t virtual machines. In some ways they are similar, but there are even more ways that they are different. Like virtual machines, containers share system resources for access to compute, networking, and storage. They are different because all containers on the same host share the same OS kernel, and keep applications, runtimes, and various other services separated from each other using kernel features known as namespaces and cgroups.
Not having a separate instance of a guest OS for each VM saves space on disk and memory at runtime, and also improves performance.
Docker added the concept of a container image, which allows containers to be used on any host with a modern Linux kernel. Soon Windows applications will enjoy the same portability among Windows hosts as well.
The container image allows for much more rapid deployment of applications than if they were packaged in a virtual machine image.



Containers networking

When Docker starts, it creates a virtual interface named docker0 on the host machine.
docker0 is a virtual Ethernet bridge that automatically forwards packets between any other network interfaces that are attached to it.
For every new container, Docker creates a pair of "peer" interfaces: one "local" eth0 interface inside the container and one with a unique name (e.g. vethAQI2QT) out in the namespace of the host machine.
Traffic going outside is NATted.
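You can observe all of this on any Docker host with a few commands (interface names will differ on your machine):

ip addr show docker0                     # the bridge and its subnet
docker run -itd --name=web nginx
ip link | grep veth                      # the host side of the peer interface
sudo iptables -t nat -L POSTROUTING -n   # the MASQUERADE rule that NATs outbound traffic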




You can create different types of networks in Docker:

veth: a peer network device is created with one side assigned to the container and the other side is attached to a bridge specified by the lxc.network.link.   

vlan: a vlan interface is linked with the interface specified by the lxc.network.link and assigned to the container.

phys:  an already existing interface specified by the lxc.network.link is assigned to the container.

empty: will create only the loopback interface (at kernel space).

macvlan:  a  macvlan interface is linked with the interface specified by the lxc.network.link and assigned to the container.  It also specifies the mode the macvlan will use to communicate between  different macvlan on the same upper device.  The accepted modes are: private, Virtual Ethernet Port Aggregator (VEPA) and bridge

Docker Evolution - release 1.7, June 2015  

Important innovations have been introduced in the latest release of Docker; some of them are still experimental.

Plugins  

A big new feature is a plugin system for the Engine; the first two plugin types available are for networking and volumes. This gives you the flexibility to back them with any third-party system.
For networks, this means you can seamlessly connect containers to networking systems such as Weave, Microsoft, VMware, Cisco, Nuage Networks, Midokura and Project Calico.  For volumes, it means that volumes can be stored on networked storage systems such as Flocker.

Networking  

The release includes a huge update to how networking is done.



Libnetwork provides a native Go implementation for connecting containers.  The goal of libnetwork is to deliver a robust Container Network Model that provides a consistent programming interface and the required network abstractions for applications.
NOTE: libnetwork project is under heavy development and is not ready for general use.
There are many networking solutions available to suit a broad range of use-cases. libnetwork uses a driver / plugin model to support all of these solutions while abstracting the complexity of the driver implementations by exposing a simple and consistent Network Model to users.

Containers can now communicate across different hosts (Overlay Driver).  You can now create a network and attach containers to it.

Example:
docker network create -d overlay net1    
docker run -itd --publish-service=myapp.net1 debian:latest  

Orchestration and Clustering for containers  

Real-world deployments are automated; single CLI commands are less used. The most important orchestrators are Mesos/Marathon, Google Kubernetes and Docker Swarm.
Most of them use JSON or YAML formats to describe an application: a declarative language that says what an application should look like.
That is similar to ACI's declarative language, with a high-level abstraction to say what an application needs from the network, and have the network implement it.
This validates Cisco's vision with ACI, which is very different from the NSX's of the world.
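As an example of such a declarative description, this is roughly what submitting an application to Marathon looks like (the endpoint and values are illustrative):

cat > myapp.json <<'EOF'
{
  "id": "myapp",
  "container": {
    "type": "DOCKER",
    "docker": { "image": "nginx:latest", "network": "BRIDGE" }
  },
  "cpus": 0.5,
  "mem": 256,
  "instances": 3
}
EOF
curl -X POST -H "Content-Type: application/json" http://marathon.example.com:8080/v2/apps -d @myapp.json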

Next post explains the advantage provided by Cisco ACI (and some other projects in the open source space) when you use containers.


References

Much of the information has been taken from the following sources.
You can refer to them for a deeper investigation of the subject:

https://docs.docker.com/userguide/
https://docs.docker.com/articles/security/
https://docs.docker.com/articles/networking/   
http://www.dedoimedo.com/computers/docker-networking.html
https://mesosphere.github.io/presentations/mug-ericsson-2014/ 
http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/
Exploring Opportunities: Containers and OpenStack
ACI for Simple Minds
http://www.networkworld.com/article/2981630/data-center/containers-key-as-cisco-looks-to-open-data-center-os.html
http://blogs.cisco.com/datacenter/docker-and-the-rise-of-microservices    
ACI and Containers white paper 
Cisco and Red Hat white paper    

Some content from the Docker documentation reused based on the Apache 2 License.