
September 19, 2016

Infrastructure as Code with Terraform and Cisco Metapod


Recently I worked with a customer to explore the concept of Infrastructure as Code.
They favor open source solutions for automating the infrastructure and for managing the life cycle of their software applications.
For the first goal they are building a private cloud based on Openstack, while Ansible and Terraform will manage the environments for the different projects.

Managing the Infrastructure as Code means that the definition of the infrastructure is maintained in text files, which can be stored in a version control system just as you do with the source code of your applications.
If you do that, the same life cycle applies to both the infrastructure and the application: creation of staging or production environments, automated testing, etc.
Using blueprints improves the quality of the final result of the project and grants compliance with policies and any legal obligations.
Benefits include speed, cost savings (no static allocation of pre-provisioned resources) and risk reduction (fewer errors and security violations).
Terraform is one of the best open source tools to manage your Infrastructure as Code: it's easy to install, learn and use (you can be productive within an hour).
You can start from the tutorials and free examples available on the Internet.

Here is an example of full automation (we'll try to get a little better result):



As a first step, to make the usage of Openstack easier on a large scale, we discussed the value of a managed service.
If the IT organization could just focus their effort on the development and operations of the business applications, instead of running the infrastructure, they would create more value for the internal customers (company's lines of business).
So I proposed the adoption of Cisco Metapod, which is Openstack as a managed service: you delegate all the tough administrative and operational work to a specialized 3rd party, while you just use the Openstack user interface and API, enjoying an SLA of 99.99% uptime.
I described the advantages of adopting Openstack as a managed service in this post: Why don't you try Openstack (without getting your hands dirty)?
Services offered by Cisco Metapod around Openstack

We created a lab where Openstack abstracts the resources of the physical and virtual infrastructure (heterogeneous servers, network and storage) and the configuration of the different environments is managed by Terraform, so that you can create, destroy, restore and update a complex system in a few minutes.


Free to use Openstack for your apps, instead of managing it: focus on your real business

With Terraform you can describe the architecture in a declarative form (in a manifest file).
You simply describe what you need (the desired state), not how the different components (devices and software) must be configured with all their parameters and their specific syntax.
The goal of Terraform is to match the current state of the system with the desired state.
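
In practice, two commands do most of the work: "terraform plan" shows the gap between the desired state and the current state, and "terraform apply" reconciles them. A minimal session could look like this (a sketch, with comments describing each step):

terraform plan      # read the manifest, compare desired and current state, list the planned actions
terraform apply     # create, update or delete resources until the two states match
terraform destroy   # tear down the whole environment when it's no longer needed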


Desired State vs Current State


Terraform is used to create, manage, and manipulate infrastructure resources. Examples of resources include physical machines, VMs, network switches, containers, etc. Almost any infrastructure noun can be represented as a resource in Terraform. Terraform is agnostic to the underlying platforms by supporting providers. A provider is responsible for understanding API interactions and exposing resources. Providers generally are an IaaS (e.g. AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Atlas, DNSimple, CloudFlare). 

Infrastructure as Code




Desired State

In my lab I reused a good example that I found on GitHub at https://github.com/berendt/terraform-configurations 
It contains all the resources you need to deploy a new Devstack instance (an all-in-one installation of Openstack, useful for developers), including the needed networks, public addresses and firewall rules, on a target cloud platform. That target, incidentally, is an Openstack instance itself (so we are deploying Openstack on Openstack).

Here is the content of the main.tf file used by Terraform: it references variables and the attributes of other resources with the ${ } interpolation syntax (e.g. ${var.region} or ${openstack_networking_network_v2.terraform.id}). Dependencies between resources are managed automatically by Terraform. A separate file can contain the predefined values for your variables (like the references to the Openstack lab in my example).
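
As an example, that companion file could look like the sketch below (all values are hypothetical placeholders for my lab; the variable names are the ones referenced in main.tf):

# terraform.tfvars - hypothetical values, replace them with your own cloud's data
user_name        = "demo"
tenant_name      = "demo"
password         = "secret"
auth_url         = "https://your-metapod-endpoint:5000/v2.0"
region           = "RegionOne"
image            = "Ubuntu 14.04"
flavor           = "m1.small"
ssh_key_file     = "./terraform-key"
ssh_user_name    = "ubuntu"
external_gateway = "<UUID of the external network>"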

If you are not interested in the content of these files (I guess that applies to 70% of my readers) you can skip them and go to the next picture... there is also a good recorded demo down there   :-)

 

main.tf (the manifest file where Terraform reads the desired state of all the resources):

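# Connection to the target Openstack cloud: credentials and endpoint come from the variables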
provider "openstack" {
  user_name  = "${var.user_name}"
  tenant_name = "${var.tenant_name}"
  password  = "${var.password}"
  auth_url  = "${var.auth_url}"
}

resource "openstack_networking_network_v2" "terraform" {
  name = "terraform"
  region = "${var.region}"
  admin_state_up = "true"
}

resource "openstack_compute_keypair_v2" "terraform" {
  name = "SSH keypair for Terraform instances"
  region = "${var.region}"
  public_key = "${file("${var.ssh_key_file}.pub")}"
}

resource "openstack_networking_subnet_v2" "terraform" {
  name = "terraform"
  region = "${var.region}"
  network_id = "${openstack_networking_network_v2.terraform.id}"
  cidr = "192.168.50.0/24"
  ip_version = 4
  enable_dhcp = "true"
  dns_nameservers = ["208.67.222.222","208.67.220.220"]
}

resource "openstack_networking_router_v2" "terraform" {
  name = "terraform"
  region = "${var.region}"
  admin_state_up = "true"
  external_gateway = "${var.external_gateway}"
}

resource "openstack_networking_router_interface_v2" "terraform" {
  region = "${var.region}"
  router_id = "${openstack_networking_router_v2.terraform.id}"
  subnet_id = "${openstack_networking_subnet_v2.terraform.id}"
}

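# A wide-open security group (all TCP, UDP and ICMP traffic from anywhere): handy for a demo, not for production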
resource "openstack_compute_secgroup_v2" "terraform" {
  name = "terraform"
  region = "${var.region}"
  description = "Security group for the Terraform instances"
  rule {
    from_port = 1
    to_port = 65535
    ip_protocol = "tcp"
    cidr = "0.0.0.0/0"
  }
  rule {
    from_port = 1
    to_port = 65535
    ip_protocol = "udp"
    cidr = "0.0.0.0/0"
  }
  rule {
    ip_protocol = "icmp"
    from_port = "-1"
    to_port = "-1"
    cidr = "0.0.0.0/0"
  }
}

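# The instance: attached to the new tenant network, reachable through a pre-allocated floating IP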
resource "openstack_compute_instance_v2" "terraform" {
  name = "terraform"
  region = "${var.region}"
  image_name = "${var.image}"
  flavor_name = "${var.flavor}"
  key_pair = "${openstack_compute_keypair_v2.terraform.name}"
  security_groups = [ "${openstack_compute_secgroup_v2.terraform.name}" ]
  floating_ip = "184.94.252.189"

  network {
    uuid = "${openstack_networking_network_v2.terraform.id}"
  }

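  # Copy deploy.sh to the new instance and execute it over SSH as soon as the VM is up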
  provisioner "remote-exec" {
    script = "deploy.sh"
    connection {
      user = "${var.ssh_user_name}"
      key_file = "${var.ssh_key_file}"
    }
  }
}


To keep it simple, for this blog post I replaced the part that deploys Devstack with the setup of a plain web server (Apache).

deploy.sh (Terraform will copy and execute it on the remote instance as soon as it is created):

#!/bin/bash

# author: Joe Topjian (@jtopjian)
# source: https://gist.github.com/jtopjian/4ffc82bfcbbcc78d07e4

sudo apt-get update
sudo apt-get install -y -f apache2


The goal is to demonstrate how easy it is to create a new software environment on a Cisco Metapod Openstack target from scratch and run it. 
The following pictures show the Metapod console before and after running the "terraform apply" command on my computer.

This is before I run the command:

The Openstack console from Cisco Metapod: view of the tenant networks

And this is the expected result (network and server infrastructure created, Apache installed):

Terraform has created all the required resources in Openstack

The next video (the most important part of this post) is a recorded demonstration of the creation of the new Apache server: you can see the launch of the "terraform apply" command that, after reading the manifest file, creates a network, a subnet, a router with two interfaces, a floating IP and an instance on Openstack. Then the Apache web server is downloaded and installed in the new instance.
The Metapod console is left in the background and you see the Openstack objects pop up as they are created.
Finally the home page of the new web server is tested.
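
By the way, when the provisioner has finished you can also test the result from your own terminal (the address is the floating IP assigned in main.tf):

curl -I http://184.94.252.189    # expect an HTTP 200 OK answer from the new Apache server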

 


 


Conclusion

Infrastructure as Code makes it much easier to get rid of the delays, the misunderstandings and the inefficiency that affect many IT organizations today.
If you standardize the process that developers follow to obtain the environments for a new project - in all the phases of the life cycle - you enable a faster go to market for new business initiatives, making your customers happy.
It is a first step towards DevOps (more is required, mostly a change in the culture of both developers and operations people).

Infrastructure as code is a brilliant way to create the needed infrastructure on demand (and release it when no longer needed), to maintain it based on blueprints and manage the definition of the infrastructure with the same tools you use for the application source code: a text editor (or your preferred IDE), a version control system, an automation tool.

If you have an IaaS platform like Openstack, provisioning both virtual and physical resources becomes easy.
If you take a further step forward with a managed service, someone will guarantee that your Openstack is correctly configured for production, up to date and in perfect health. You enjoy all the benefits, without the hassle of setting it up and operating it daily.




References:
Configuration to run acceptance tests for Terraform/OpenStack - https://github.com/berendt/terraform-configurations
Why don't you try Openstack (without getting your hands dirty)? - http://lucarelandini.blogspot.it/2016/06/why-dont-you-try-openstack.html



June 14, 2016

Why don't you try Openstack (without getting your hands dirty)?


Is Openstack ready?
But, more important: are you ready for Openstack? 

 



Openstack is mature (but complex).

Surveys and statistics show that Openstack is mature and provides a number of benefits to a broad spectrum of users, from small and large enterprises to service providers.
Almost every IT professional (including CIOs and CTOs) knows the advantages that Openstack would offer to their organization.
But many are also aware of the complexity of the technology, and of the need for new operational processes and skills to set up and operate Openstack.
A scalable and reliable production environment is different from a lab where you explore the capabilities of the new platform.
The journey to a mature adoption of Openstack is not easy, and you need to invest time and money.
In addition, when you hire people (or train yours), there is a chance that another company steals them with the offer of a better salary, given the scarcity of these skills on the market.

So many IT organizations - excluding cloud service providers, because that's exactly their business - started wondering whether it's worth spending time running the infrastructure rather than running their business applications.
If you are not a cloud provider that makes money selling IaaS, why should you dedicate additional effort to installation, monitoring, troubleshooting and release upgrades, to ensure reliability and performance for your applications (the only asset you should care about, because your business relies on them)?

Focus on your real business.

Why don't you delegate all that responsibility to a provider, signing a contract that puts the above tasks and the SLA on them?
Doing so, you would be free to use Openstack, getting all the benefits that you expect from it, without the burden of the learning curve and the organizational change implied by the Openstack adoption.
You would focus on using the infrastructure to develop and run your applications, no longer on running the infrastructure itself.

delegate the responsibility of the service to a specialized provider


That is called a managed service.

You own the infrastructure and exploit the value of your Data Center assets (you don’t just drop them to escape to a public cloud).
An expert team (it's just their business) installs Openstack in your DC and operates it every day in a HA (high availability) configuration, granting 99.99% uptime.
They take care of all the version upgrades and of the compatibility of all the new features released by the community, by using a certified configuration.
The user interface (the Horizon console, the Openstack API and command line interface) is available to you, so you can deploy virtual server instances, networks and storage at will. You get complete and granular reporting on the health of the system and its performance.
You are the owner, but you don't get your hands dirty with the complex stuff   :-)
You pay them for the service, they grant you the SLA.

Just taste if you like Openstack.

The approach described above can be a strategic decision, because you want to focus on your business applications.
But you could also use this trick to stand up an Openstack environment in a very short time, test it for a while (e.g. 3 or 6 months) - does your organization adapt to it, do your applications run well, is the operational model (IaaS at home, on your infrastructure, no cloud provider lock-in) good for you, are your developers more productive? - and finally decide if you want to adopt it.
At that point you can choose between continuing with the managed service or doing it yourself.
It is a zero risk trial of the technology and of the processes: if you don't like it, you haven't wasted any time or effort to stand it up, so you can happily retreat.
You simply do not renew the service contract, and that's all: you have made a real informed decision about the adoption of Openstack.


no provider lock in for your cloud



Cisco Metapod: Openstack as a managed service.

Cisco has an offer that allows you to do what I described above; it comes from the acquisition of a company whose business was exactly Openstack as a managed service, on your premises.
They had an Openstack distribution of their own, optimized and hardened to provide a smooth and effective service.
Now, thanks to a strong partnership with Red Hat, the team is using the Red Hat Enterprise Linux Openstack distribution (OSP8, based on Liberty).

The essential features of this service are:
- easy start: entry level contract for 90 days
- ready to go live in 2-3 weeks from the engagement
- HA included
- the infrastructure to run Openstack can either be yours or provided by Cisco
- both the Openstack API and the AWS API are exposed by the system

And the infrastructure to run it in production can be as simple as this:


the servers and the switches you need to run Openstack


The value you can get from it: a well defined SLA, installation included, maintenance and upgrades included, no cloud provider lock-in.

advantages of the Cisco Openstack managed service: Metapod


I believe that Cisco Metapod is a very good option to start with Openstack.
You can put your foot in the water to test the temperature, then decide to take a bath if you like it.

you can decide if you like Openstack without investing in a big project


References

Openstack users survey 
Cisco Metapod official page
Cisco and Openstack on this blog 

May 10, 2016

A simpler framework for hybrid cloud

Hybrid cloud is one of top mind projects for most IT managers, and there's little content that one can add to be original   ;-)

The hype, and the attempts of many vendors (including... Cisco) to provide relevant solutions, have populated this space with an incredible number of offers, making it hard to distinguish what works, what is manageable and cost effective, from what is only marketecture.




Recently Cisco decided to invest even more in cloud and, with the advent of a new CTO and some acquisitions, revised its approach to hybrid cloud to make it easier and more effective. This post is not from official marketing and is not echoing the company's direction: it's my attempt to rationalize my understanding of the new framework and to solicit your comments and feedback, so that I can leverage them when I talk to my customers and partners.
The following picture represents the area where Cisco plays a role, offering hardware and software solutions.
When it comes to the software stack to manage the infrastructure and provide services to the users, we have a mix of Cisco products, open source solutions and integration with 3rd parties. The objective is to offer a set of pre-validated stacks that can match the different needs, granting a deterministic result.



I shared some thoughts with a group of colleagues because we're planning educational activities for our field people: instead of just providing a reference architecture (that would end up being a list of products to be forced into every deal) we tried to represent the functions in the system as components of a framework, from which we'll pull the specific architecture for a given project. This, used cum grano salis, should help to be pragmatic and realize quick wins (for both the customers - think of Fast IT initiatives - and of course for Cisco).

As a result, next picture is separating the different functional layers so that they can be explained to sales guys and to customers.
It could also help to manage the possible overlap with alternative solutions that customers may choose - or already have - because every element in the picture is replaceable, thanks to the open API each one exposes/consumes (as does any well designed 3rd party product).

It is important to note that the top two layers in the picture are optional, since not all customers need those functions in their system. Based on the level of Governance that they want to have, the existing processes and the way they develop business applications (or use commercial software that only need a resource pool to be deployed), the entry point could be directly at the third layer (Multi-Cloud Management) and ITSM and PaaS would be removed.




So, while we explain all the possibilities as said above, we need to make customers feel confident that it's doable and not overly complex.
In that regard, my motto is that "cloud is not a product (or a set of products), it's a project, and it's complex in nature... regardless of the product set you choose". Generally the cost of hardware and software products is lower than that of development and consulting services, and customers know it.
If we can claim that a pre-built integration makes the project easier (and we can), I would stress the value of reducing the project risk and delivering outcomes faster, rather than a cheaper implementation.

Selling licenses can be (almost) easy, but driving adoption with business outcomes for customers is different. Finally Cisco has built a practice that can deliver IT projects effectively and recruited partners that do the same: customers have different options to choose from.

Now, in the context of an end to end strategy defined with the customer, we can deliver projects based on agile methodologies (e.g. Scrum) and implement the architecture layers with a bottom-up approach: from a strong capability to automate the Data Center (and the hybrid cloud) you can create services that are surfaced up to the consumption layers, including a self service catalog.


Software Defined What?

The bottom-up approach stresses the value of the API exposed by UCS and ACI (with the further evolution from basic programmability to policy-based management, which I'm not covering yet - look out for the next post). With the power and the granularity of those API, you can truly realize a fully Software Defined Data Center (SDDC): servers and networks can be shaped via software interfaces.
By the way, I take the opportunity here to clarify that Software Defined Data Center does not mean Software Implemented Data Center: you don't necessarily need a software overlay that mimics the behavior of the hardware (living as a separate entity), you need software controllers that drive the shape and the behavior of both physical and virtual resources in the DC as a single system.
Better if they do that based on policies... like the Cisco architecture does  :-)
You will see a post dedicated to policies and application intent soon on this blog.



Competition?

We also recognize that many customers have already an ITSM solution in place, or any other form of governance. So we don't engage in competitive fights, like imposing Cisco Prime Service Catalog vs Service Now, but we rather integrate our solution with the existing components: this is a sort of compromise with a competitor that hurts my pride, but since it's for our customers' benefit... it's a good solution.

Cisco Cloud Center as a broker: the recent acquisition of CliQr brings Cisco a great solution to address the multi-cloud management use cases, the most important ones for the majority of customers. In the logical schema above you can see that the hybrid cloud scenario has been better qualified as Multi-Cloud management.
This reflects the fact that having an application deployed partly in your Data Center and partly in the public cloud is still a relevant use case, but many companies are more attracted by other scenarios... like moving from one project stage to the next (e.g. Dev-Test-QA-Prod) using different resource pools (on premise or in cloud), or moving their assets from one cloud provider to a different one.


Cloud Brokering and Multi Cloud Management

In the first scenario (promotion to the next stage) it can be useful to leverage resources that are allocated based on business convenience (e.g. cost or flexibility) or compliance (e.g. data sovereignty), so the application and all the needed infrastructure are moved back and forth between on-premise pools and the public cloud.
In the second scenario the driver could be a dual provider strategy, or maybe a change in the market conditions that makes one provider more appealing than the current one, or a strategic switch from private cloud to public (or vice versa).


In all these cases, we offer a solution to deploy a software stack (a complete custom application, a development platform, or a commercial software product) as a self service option, where the target can be selected dynamically from a list of available clouds.
You can deploy to your local private cloud, based on VMware or any other virtualization solution, or to an Openstack based cloud, or to any of the public cloud providers where you have an account.
Any resource pool is a possible destination for the deployment (and the life cycle management, including autoscale or retirement of the application).
The model of the deployment of the application is completely de-coupled from the selection of the target, thanks to the capabilities of the orchestrator that can configure the needed resources in almost any cloud transparently.
It uses the API exposed by the element managers of a multi-vendor infrastructure on premise (e.g. vCenter, UCS Manager, the ACI controller, etc.) and those exposed by public clouds like AWS, Azure, etc.



From a logical schema to a real deployment

So we can offer users a different entry point, based on their business needs (they might need a ticketing system, or a self service catalog, a PaaS solution or directly the web portal of the multi cloud manager to model deployments and deliver them).
The customer can have one or more resource pools, allocated wherever he likes (local or in cloud), and let the broker direct the selection of the target based on predefined policies.

The schema in the next picture presents different products at every layer: a solution can be based on one of them, or on a combination. We have the flexibility to match the specific needs with products from Cisco, from 3rd party vendors, or open source.
As an example, MANTL is a new open source project that makes the development of microservices easier if you build cloud native applications.




I will expand the detail of the single products and the open source solutions shown in this picture in my next post.
Stay tuned...


References

http://www.cisco.com/c/en/us/solutions/executive-perspectives/fast_it.html
http://www.cisco.com/web/solutions/trends/futureofit/why-cisco.html
http://MANTL.io
http://Github.com/CiscoCloud/microservices-infrastucture 
http://lucarelandini.blogspot.it/2015/10/devops-docker-and-cisco-aci-part-1.html
http://lucarelandini.blogspot.it/2015/03/aci-for-dummies.html
http://lucarelandini.blogspot.it/2015/09/the-phoenix-project-how-devops-can.html




May 4, 2015

Openstack and Cisco

Cisco is investing a lot in Openstack, as other vendors do these days.
Initiatives include being a Gold member of the Openstack Foundation, sitting on the board of directors, and contributing blueprints and code to different Openstack projects (mainly Neutron, which manages networking, but also Nova and Ironic).

Cisco also uses Openstack in its own data centers, to provide cloud services to the internal IT (our private cloud) and to customers and partners (the Cisco Cloud Services in the Intercloud ecosystem). We also have a managed private cloud offer based on Openstack (formerly named Metacloud).


Based on this experience, a CVD (Cisco Validated Design) has been published to allow customers to deploy the Openstack platform on Cisco servers and network. The prescriptive documentation guides you through installing and configuring the hardware and the software in such a way that you get the expected results in terms of scale and security. It has been fully tested and validated in partnership with Red Hat.

Another important point is the offer of the Cisco ACI data model to the open source community. The adoption of such a model in Openstack (the GBP, i.e. the Group Based Policy) is a great satisfaction for us.

Openstack will also be managed by the Stack Designer in Cisco Prime Service Catalog (PSC 11.0), to create PaaS services based on Heat (similarly to what we do now with Stack Designer + UCS Director). Templates to deploy a given Data Center topology will be added as services in the catalog and, based on them, other services could be offered with the deployment of a software stack on top of the Openstack IaaS. The user will be able to order, in a single request, the end to end deployment of a new application.

 

In this post I will tell you about the main topics in the Cisco-Openstack relationship:

1 - Available Plugins for Cisco products (Nexus switches, UCS servers, ACI, CSR, ASR)
2 - GBP: Group Based Policy (the ACI model adopted by the Openstack community)



Available Plugins for Cisco products

Plugins exist for the following projects in Openstack: Neutron, Nova, Ironic.

You can leverage the features of the Cisco products while you maintain the usual operations with Openstack: the integration of the underlying infrastructure is transparent for the user.

 

Networking - Project Neutron

Plugins for all the Nexus switching family
 - Tenant network creation is based on VLAN or VXLAN
Plugins for ACI
 - Neutron Networks and Routers are created as usual, and the plugin translates those requests into calls to the API exposed by the Cisco APIC controller

A number of Neutron plugins are available already: Nexus 1000v, 3000, 5000, 6000, 7000 and 9000 Series Switches are supported (see http://www.cisco.com/c/en/us/products/collateral/switches/nexus-3000-series-switches/data_sheet_c78-727737.html).

You can also scale the OpenStack L3 services using the Cisco ASR1K platform (see http://blogs.cisco.com/datacenter/scaling-openstack-l3-using-cisco-asr1k-platform#more-163906) and use the Cloud Services Router (CSR) for Openstack VPN as a Service (see Neutron blueprints web site for Kilo and http://specs.openstack.org/openstack/neutron-specs/specs/kilo/cisco-vpnaas-and-router-integration.html).


Network Service Plug-in Architecture (ML2)

This pluggable architecture has been designed to allow for a common API, rapid innovation and vendor differentiation:




Based on the delegation of the real networking service to the underlying infrastructure, the Openstack user does not care what networking devices are used: he only knows what service he needs, and he gets exactly that.


Use the existing Neutron API with APIC and Cisco ACI   

When the Openstack user creates the usual constructs (Networks, Subnets, Routers) via Horizon or the Neutron API, the APIC ML2 plugin intercepts the request and sends commands to the APIC API.
Network profiles, made of End Point Groups and Contracts, are created and pushed to the fabric. Virtual networks created in the OVS virtual switch in KVM are matched to the networks in the physical fabric, so that traffic can flow to and from the external world.
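
In other words, the workflow on the Openstack side stays exactly the same. Here is a minimal sketch with illustrative names (the plugin transparently maps each step to End Point Groups and Contracts on the fabric):

neutron net-create web-net
neutron subnet-create --name web-subnet web-net 10.0.1.0/24
neutron router-create app-router
neutron router-interface-add app-router web-subnet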



Another plugin is the one for the Cisco UCS servers, leveraging the UCS Manager API.
This integration allows you to leverage the single point of management of a UCS domain (up to 160 servers), instead of configuring networking on the single blades or - as in competing server architectures - on the individual switches in the chassis.

An additional advantage offered by UCS servers is the VM-FEX (VM fabric extender) feature: virtual NICs can be offered to the VM directly from the hardware, bypassing the virtual switch in the hypervisor thanks to SR-IOV, gaining performance and a centralized management.


Next picture shows the automated VLAN and VM-FEX Support offered by the Cisco UCS Manager plugin for OpenStack Neutron:



Bare metal deployment - Project Ironic  

Plugin for UCS Manager to deploy Service Profiles for bare metal workloads on the UCS blades

Ironic is the OpenStack service which provides the capability to provision bare metal servers. The initial version of the Ironic pxe_cisco driver adds support for managing power operations of Cisco UCS B/C series servers that are managed by UCS Manager, and provides vendor_passthru APIs.
The user can control power operations using the pxe_cisco driver. This doesn't require the IPMI protocol to be enabled on the servers, because the operations are controlled via Service Profiles.

The vendor_passthru APIs allow the user to enroll the nodes automatically in the Ironic DB. They also provide APIs to get node specific information like inventory, faults, location, firmware version etc.
The code is available on GitHub at https://github.com/CiscoUcs/Ironic-UCS


GBP: Group Based Policy


The most exciting news is the adoption in Neutron of the GBP (Group Based Policy) model and API, which derives from the way the Cisco APIC controller manages End Point Groups and Contracts in the ACI architecture. A powerful demonstration of Cisco's thought leadership in networking.

The Group Based Policy (GBP) extension introduces a declarative policy driven framework for networking in OpenStack. The GBP abstractions allow application administrators to express their networking requirements using group and policy abstractions, with the specifics of policy enforcement and implementation left to the underlying policy driver. This facilitates clear separation of concerns between the application and the infrastructure administrator.


Two Options for the OpenStack Neutron API


The Neutron user can now select the preferred option between two choices: the usual building blocks in Neutron (Network, Subnet, Router) and the new - optional - building blocks offered by GBP.


 



In addition to support for the OpenStack Neutron Modular Layer 2 (ML2) interface, Cisco APIC supports integration with OpenStack using Group-Based Policy (GBP). GBP was created by OpenStack developers to offer declarative abstractions for achieving scalable, intent-based infrastructure automation within OpenStack. It supports a plug-in architecture connecting its policy API to a broad range of open source and vendor solutions, including APIC.
This means that other vendors could provide plugins for their infrastructure, to use with the GBP API.
While GBP is a northbound API for Openstack, the plugins are a southbound implementation.



In this case the Neutron plugin for the APIC controller has an easier task: instead of translating from the legacy constructs (Networks, Subnets, Routers) to the corresponding ACI constructs (EPGs, Contracts), it just resends (proxies) identical commands to APIC.




Read more about group-based policy at https://wiki.openstack.org/wiki/GroupBasedPolicy and the Cisco Application Policy Infrastructure Controller Driver for OpenStack Group-Based Policy Data Sheet

In a few days, at the Openstack Summit in Vancouver, we'll see all the latest news about the Cisco contribution to Openstack. Don't miss it!

[Added on June 14, 2016]
You can read how easy it is to start with Openstack in Why don't you try Openstack (without getting your hands dirty)?

Useful Links:

http://www.cisco.com/c/en/us/solutions/data-center-virtualization/openstack-at-cisco/index.html 
http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-733126.pdf
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/cisco-vpnaas-and-router-integration.html

GBP
https://www.openstack.org/summit/openstack-paris-summit-2014/session-videos/presentation/group-based-policy-extension-for-networking
http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/openstack-at-cisco/datasheet-c78-734181.html
https://www.rdoproject.org/Neutron_GBP




March 17, 2015

The Elastic Cloud project - Porting to UCSD

Porting to a new platform

This post shows how we did the porting of the Elastic Cloud project to a different platform.
The initial implementation was done on Cisco IAC (Intelligent Automation for Cloud) orchestrating Openstack, Cisco ACI (Application Centric Infrastructure) and 3 hypervisors.

Later we decided to implement the same use case (deploy a 3 tier application to 3 different hypervisors, using Openstack and ACI) with Cisco UCS Director, aka UCSD.

The objective was to offer another demonstration of flexibility and openness, this time targeting IT administrators rather than end users as the first project did.
You will find a brief description of UCS Director in the following paragraphs: essentially it is not used to hide complexity, but to allow IT professionals to do their job faster and in an error-proof way.
UCSD is also a key element in a new Cisco end-to-end architecture for cloud computing, named the Cisco ONE Enterprise Cloud suite.

The implementation was supported by the Cisco dCloud team, the organization that provides excellent remote demo capabilities on a number of Cisco technologies. They offered me the lab environment to build the new demo and, in turn, the complete demo will be offered publicly as a self service environment on the dCloud platform.

The dCloud demo environment

Cisco dCloud provides Customers, Partners and Cisco Employees with a way to experience Cisco Solutions. From scripted, repeatable demos to fully customizable labs with complete administrative access, Cisco dCloud can work for you. Just log in to dcloud.cisco.com with your Cisco account and you'll find all the available demos:


Cisco UCS Director

UCSD is a great tool for Data Center automation: it manages servers, network, storage and hypervisors, providing you a consistent view on physical and virtual resources in your DC.

Despite the name (which could suggest it manages Cisco UCS servers only), it integrates with a multi-vendor, heterogeneous infrastructure, offering a single dashboard plus the automation engine (with a library containing 1300+ tasks) and an SDK to create your own adapters if needed.

UCSD offers an open API, so that you can run its workflows from the UCSD catalog or from a 3rd party tool (a portal, an orchestrator, a custom script).

There is a basic workflow editor, which we used to create the custom process integrating Openstack, ACI and all the hypervisors to implement our use case. We don't consider UCSD a full business-level orchestrator, because it's not meant to also integrate the BSS (Business Support Systems) of your company, but it automates the DC infrastructure, including Cisco and 3rd party technologies, pretty well.

Implementing the service in UCS Director

Description of the process

The service consists of the deployment of the famous 3 tier application with a single click.
The first two tiers of the application (the web and application servers with their networks) are deployed on Openstack. The first version of the demo uses KVM as the target hypervisor for both tiers; the next version will replace one of the Openstack compute nodes with Hyper-V.
The 3rd tier (the database and its network) is deployed on ESXi.
On every hypervisor, virtual networks are created first. Then virtual machines are created and attached to the proper network.

To connect the virtual networks in their different virtualized environments we used Cisco ACI, creating policies through the API of the controller.
One End Point Group is created for each of the application tiers, and Contracts are created to allow the traffic to flow from one tier to the next one (and only there).
If you are not familiar with the ACI policy model, you can see my ACI for Dummies post.

All these operations are executed by a single workflow created in the UCSD automation engine.
We just dropped the tasks from the library into the workflow editor, provided input values for each task (taken from the output of previous tasks) and connected them in the right sequence by drawing arrows.
The resulting workflow executes the same sequence of atomic actions that the administrator would perform manually in the GUI, one by one.

The implementation was quite easy because we were porting an identical process created in Cisco IAC: the tool to implement the workflow is different, but the sequence and the content of the tasks is the same.

Integration out-of-the-box

Most of the tasks in our process are provided by the UCSD automation library: all the operations on ACI (through its APIC controller) and on ESXi VMs and networks (through vCenter).




When you use these tasks, you can immediately see the effect on the target system.
As an example, this is the outcome of creating a Router in Openstack using UCSD: the two networks are connected in the hypervisor, and the APIC plugin in Neutron immediately talks to Cisco ACI, creating the corresponding Contract between the two End Point Groups (please check the Router ID in Openstack and the Contract name in APIC).



 

Custom tasks

The integration with Openstack required us to build custom tasks, adding them to the library.
We created 15 new tasks, to call the API exposed by the Openstack subsystems: Neutron (to create the networks) and Nova (to create the VM instances).
The new tasks were written in Javascript, tested with the embedded interpreter, then added to the library.
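
Each custom task essentially wraps a REST call to the Openstack API. As a sketch, the call behind the task that creates a network would look roughly like this (endpoint, token and names are illustrative):

# Create a network through the Neutron API (the token comes from a previous Keystone authentication)
curl -X POST http://controller:9696/v2.0/networks \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"network": {"name": "web-tier", "admin_state_up": true}}'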




After that, they were available in the automation library among the tasks provided by the product itself.
This is a very powerful demonstration of the flexibility and ease of use of UCSD.



I should add that the custom integration with Openstack was built for fun, and as a demonstration.
To implement the deployment of the tiers of the application to 3 different hypervisors we could use the native integration that UCSD has with KVM, Hyper-V and ESXi (through their managers).
There's no need to use Openstack as a mediation layer, as we did here.


The workflow editor

Here you can drag and drop the tasks, validate the workflow, run the process to test it and see the executed steps (with their logs and all their input and output values).









Amount of effort

The main activities in building this demo were two:
- creating the custom tasks to integrate Openstack
- creating the process to automate the sequence of atomic tasks.

The first activity (skills required: Javascript programming and understanding of the Openstack API) took 1 hour per task: a total of 2 days.
Jose, who created the custom tasks, has also published a generic custom task to execute REST API calls from UCSD: https://github.com/erjosito/stuff/blob/master/UCSD_REST_custom_tasks.wfdx
In addition, he suggests a simple method to understand which REST call corresponds to a given Openstack CLI command: just use the --debug option in the Openstack CLI and you will see it immediately.

As an example, to boot a new instance:
nova --debug boot --image cirros-0.3.1-x86_64-uec --flavor m1.tiny --nic net-id=f85eb42a-251b-4a75-ba90-723f99dbd00f vm002
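
The debug output shows the HTTP requests sent by the CLI client, roughly equivalent to this sketch (token, tenant ID and image UUID are illustrative):

# The boot command above translates into a POST to the Nova API
curl -X POST http://controller:8774/v2/$TENANT_ID/servers \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"server": {"name": "vm002", "imageRef": "<image UUID>", "flavorRef": "<flavor ID>",
       "networks": [{"uuid": "f85eb42a-251b-4a75-ba90-723f99dbd00f"}]}}'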


The second activity (create the process, test it step by step, expose it in the catalog and run it end to end) took 3 sessions of 2 hours each.
This was made easier by the experience we gained during the implementation of the Elastic Cloud Project: we already knew the atomic actions we needed to perform, their sequence and the input/output parameters of each action.
If we had to build everything from scratch, I would add 2-3 days to understand the use case.


Demo available on dCloud

The demo will be published on the Cisco dCloud site soon for your consumption.
There are also a number of demonstrations available already, focused on UCS Director.
You can learn how UCSD manages the Data Center infrastructure, how it drives the APIC controller in the ACI architecture, and how it is leveraged by Cisco IAC when it uses the REST API exposed by UCSD.

Acknowledgement

A lot of thanks to Simon Richards and Manuel Garcia Sanes from Cisco dCloud, to Russ Whitear from my same team and to Jose Moreno from the Cisco INSBU (Insieme Business Unit).
Great people that focus on Data Center orchestration and many other technologies at Cisco!

You can also find a powerful, yet easy demonstration of how UCSD workflows can be called from a client (a front end portal, another orchestrator...) at Invoking UCS Director Workflows via the Northbound REST API