
July 19, 2019

Just one button to provision a production-grade Kubernetes cluster

(this is a guest post, authored by my esteemed colleague Fabio Di Niro)

Do you remember?


I bet all of you who are working or playing with Kubernetes remember perfectly the first time you tried to install it.
And the second.
And the third.
...
And the one that finally worked out.

And if you're a professional, you also remember the long path that brought you the Kubernetes expertise needed to install and fine-tune production-grade clusters.
Or, if you're not a Kubernetes professional, you probably remember how much time it took to find someone able to perform a valid Kubernetes install... and how much it cost.

To save our customers all this time and effort, Cisco released the Cisco Container Platform (CCP), a turnkey solution to easily provision production-grade Kubernetes clusters on-prem or in the cloud in minutes, with a few mouse clicks and little to no knowledge of K8s required.
All the needed integrations with network, storage, computing and security are done automatically by CCP so that the provisioned K8s clusters are ready to run in production.
Clusters provisioned by CCP come already equipped with properly configured monitoring and logging tools such as Fluentd, Grafana, Elasticsearch and Kibana.
Through the Container Network Interface (CNI) you can choose whether to use Cisco ACI as the network infrastructure or Calico (which has no dependency on the underlying infrastructure).

This is already great, but I thought I would create a demo that pushes the simplicity of those "few mouse clicks" to its limit, making it possible to create a production-grade cluster in just one click.







Introducing the Kubernetes dash button.

The concept is fairly simple: build a dash button that, once pressed, creates a production grade Kubernetes cluster ready to use.

Leveraging the rich set of Cisco Container Platform (CCP) APIs, this was almost too easy, so I decided to add a few more features on top:

- I wanted to provision the cluster and access it just through the dash button, so CCP had to display the IP address of the cluster's master node on the dash button itself
- The start and finish of the cluster provisioning process had to be confirmed, so the communication with the dash button had to be bi-directional
- I wanted decent battery life, so I wouldn't have to recharge the button every day; this meant the electronics had to be able to sleep or hibernate
- My lab, where I have the infrastructure and the CCP, is behind a proxy, so I can't listen for calls inside the lab; I can only initiate communications from the lab. I needed a way to turn the "push" of the button into a "pull" of the button-press information
- I wanted to use the button everywhere I go, without worrying about the local Wi-Fi settings



How it works

To satisfy all the above requirements I added a couple of elements in the picture, ending up with the following architecture:



The button is based on an Arduino ESP32 board. It connects via Wi-Fi to my smartphone and uses its internet connection, so I can use the button anywhere my phone has a data signal. The button leverages a publish-subscribe message service (MQTT) in the cloud to bypass the proxy limitation of my lab and reach a couple of scripts that call the right APIs in the Cisco Container Platform to trigger the provisioning of a shiny new Kubernetes cluster.
Once the cluster is provisioned, the IP address of the master node is returned to the dash button, which shows it on its display; at this point the cluster is ready to accept connections and be used.
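
To make the flow concrete, here is a minimal sketch (in Python) of the "pull" side that could run inside the lab: it subscribes to the cloud MQTT broker for button presses, asks CCP to provision a cluster and publishes the master node IP back to the button. The broker address, topic names, CCP endpoint and payload fields are illustrative assumptions, not the actual project code (that lives in the GitHub repository linked below).

import requests
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"          # hypothetical public MQTT broker
CCP_API = "https://ccp.lab.local/v3"   # hypothetical CCP API endpoint

def on_message(client, userdata, msg):
    if msg.topic != "ccp/button/pressed":
        return
    client.publish("ccp/button/status", "provisioning started")
    # Ask CCP to provision a new Kubernetes cluster (illustrative call only)
    resp = requests.post(f"{CCP_API}/clusters",
                         json={"name": "dash-button-cluster"},
                         verify=False)
    master_ip = resp.json().get("master_ip", "unknown")
    # Send the master node IP back, so the button can show it on its display
    client.publish("ccp/button/master-ip", master_ip)

client = mqtt.Client()                 # paho-mqtt 1.x style client
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe("ccp/button/pressed")
client.loop_forever()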

A 3D-printed enclosure completed the project. I started from an existing model, but since I decided to leverage the capability of CCP to deploy K8s clusters either on-prem or in the cloud, I designed the two different enclosures you can see in the picture: one dash button for each deployment target.
All the code and 3D designs have been released and are publicly available at: https://github.com/fdiniro/CCPDashButton




Now, before doing my demo, I can ask my customers: "How much time and effort does it take you to install a production-grade, fully operationalized and secured Kubernetes cluster?" and, whatever answer I get, I know I can reply "I can do it in 2 minutes, blindfolded and cuffed".

You can see the recorded demo here: https://youtu.be/-F-xR0XNPBs



March 27, 2018

Why do you run slow, fragile and useless applications and are still happy?


If you are not interested in the details, at least browse the post and watch the amazing video recordings embedded :-)

We have already discussed the value of automation in the deployment of software applications.
It is also clear that collecting telemetry data from systems and applications into analytic platforms enhances visibility and control, with an important return on your business.

Cisco offers best-of-breed solutions for both automation and analytics, but the biggest value is in their end-to-end integration. Your applications can be deployed with Cisco CloudCenter and completely controlled with AppDynamics and Tetration, with no manual intervention.

Why do you need that visibility?


Thanks to the information provided by AppDynamics, you have immediate visibility of the performance and the dependency map of your software applications. Tetration exposes compliance and performance from a system standpoint.
CloudCenter offers your users a self-service catalog where applications can be selected for deployment in one of the clouds you have configured as a target. The smartest feature is that every deployment can also install the sensors for AppDynamics and Tetration automatically in each node of the application topology, so that telemetry data start to be collected immediately.
Why we need to collect information at runtime


With that, you now have no excuse to keep running applications that do not perform well enough, that expose vulnerabilities, or that produce limited business return due to poor customer satisfaction or inefficiency. The same applies to non-compliant applications that break your security rules or your architectural standards: applications deployed years ago that are now untouchable due to complexity and lack of documentation.

With such an easy integration of telemetry, and the insight you can get immediately, it makes no sense to keep those monsters running in your datacenter or in your cloud. Once the bottlenecks and security risks are identified, you can evolve the applications and remove them.


Analytic tools add value to the application telemetry



We want to demonstrate how easy it is.

This post is a follow-up to the demonstration of the integration of Cisco CloudCenter with Tetration: we extended the demo by adding AppDynamics, so that our applications are now completely under control when it comes to security, compliance, performance and business impact.

Architectural Overview

 

We used a well-known application as an example: WordPress is an open source tool for website creation, written in PHP. It uses a common LAMP stack: Apache + PHP + MySQL, running on Linux.
WordPress is a two-tier application, so you generally deploy two VMs to run it: the front end is an Apache web server with the PHP application, and the back end runs the database (MySQL).

We want each tier to be monitored, by default, by both AppDynamics and Tetration. This must happen without introducing any complexity for the user who orders WordPress from the self-service catalog, and it must work in any target cloud. Based on the administrator's preference, the user could even be unaware of the monitoring setup.



AppDynamics and Tetration integration with CloudCenter
Overview of the architecture



The next paragraphs describe the architecture of AppDynamics and of Tetration, so that you understand what integration we built to make CloudCenter inject telemetry sensors into each deployment.
Then we explain the process triggered when a user deploys WordPress from the CloudCenter catalog.
Detailed video recordings of all the steps are also provided.


AppDynamics: Architecture of the system


AppDynamics uses agents to collect information from the running servers and send it to the Controller. Agents are specific to the runtime of various programming languages, but there are also agents that interact only with the operating system: you choose the type of agent that best fits each node. Databases and their transactions can also be monitored.
The Controller is where users go to view, understand and analyze the data sent by the agents.
Agents send the Controller data about usage metrics, code exceptions, error conditions and calls to backend systems.

AppDynamics overview


CloudCenter: the Application Profile

 

The next picture shows the Application Profile of the WordPress service that we created in CloudCenter. Each VM in the two tiers will contain the application and the required sensors for AppDynamics and Tetration.

The Tetration Injector component is an ephemeral Docker container that CloudCenter uses just to invoke the API exposed by the Tetration cluster, so that the telemetry data are expected when they arrive and associated with the scope of the WordPress deployment. It disappears when the deployment is completed.
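
Conceptually, the injector only has to make one call before the telemetry starts flowing: create a scope named after the deployment. The sketch below (Python) illustrates the idea with a hypothetical endpoint, query and authentication; it is not the documented Tetration OpenAPI, nor the actual injector code.

import requests

TETRATION_URL = "https://tetration.lab.local"   # hypothetical cluster address

def create_scope(deployment_name: str) -> None:
    """Create a scope named after the CloudCenter deployment (e.g. WPDEMO)."""
    payload = {
        "short_name": deployment_name,
        # match the workloads whose hostname carries the deployment name
        "query": {"type": "contains", "field": "host_name", "value": deployment_name},
    }
    resp = requests.post(
        f"{TETRATION_URL}/openapi/v1/app_scopes",          # illustrative path
        json=payload,
        headers={"Authorization": "Bearer <api-token>"},   # placeholder auth
        verify=False,
    )
    resp.raise_for_status()

create_scope("WPDEMO")   # the value of $parentJobName in our demo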

Topology of the application deployment, showing the sensors applied


As for any other application, the integration is implemented using custom scripts that deploy the agents for AppDynamics and Tetration.
All application artifacts, scripts and services are stored in a repository and pulled by the CloudCenter agent running in each VM.
CloudCenter executes our scripts during different stages of the deployment to add the AppDynamics agent and the Tetration sensor (using the same technique you could add any other agent you use for backup, monitoring, etc.).
This is a video (2 min) showing how the Application Profile for Wordpress is built in CloudCenter:




CloudCenter integration with AppDynamics


The green boxes in the next picture show the sequence of actions executed by the CloudCenter agent to deploy the AppDynamics PHP agent in the frontend VM: the same actions the administrator would otherwise perform manually.

Installing and configuring AppDynamics agents


For your reference, we used a shell script with placeholders; the configuration parameters are replaced dynamically by CloudCenter, as listed below:

AGENT_LOCATION="http://cc-repo.rmlab.local"
APPD_CONTROLLER="appd.rmlab.local"
APPD_CONTROLLER_PORT="8090"
APPD_ACCESS_KEY="a4abcdc7-ce1c-41cb-[cut]"
APPD_ACCOUNT_NAME="customer1"
APP_NAME="$parentJobName"       # $parentJobName is replaced by WPDEMO, the name given to the deployment by the user
TIER_NAME="$cliqrAppTierName"   # $cliqrAppTierName is replaced by WSERVER, the tier name in the Application Profile
HOST_NAME="$cliqrNodeHostname"  # $cliqrNodeHostname is replaced by C3-b2a9-WPDEMO-WSER, generated by CloudCenter when it provisions the VM

This video (2 min) shows how the existing Application Profile is updated adding the deployment of the AppDynamics agent:




Tetration: Architecture of the system


Tetration is a ready-to-use big data platform which runs a Hadoop cluster at its core.
As described in a previous post, Tetration collects telemetry streamed by software and hardware sensors. It stores metadata within the data lake and runs machine learning algorithms to provide business outcomes.
Tetration sensors, downloaded from the cluster itself, embed the required configuration and don't need any user input. As soon as they are installed, they start to stream rich telemetry and can optionally enforce local workload policies.


Tetration overview




CloudCenter integration with Tetration


At deployment time, a dropdown list allows the user to select one of the two types of sensor: Deep Visibility, or Deep Visibility with Enforcement (of security policies).

The telemetry data for this application are segregated under a specific scope, created by CloudCenter during the provisioning phase using the variable $parentJobName (containing the value WPDEMO in our demonstration).
The sensors are installed in each VM via a custom script, as shown in the next picture:


Installing and configuring Tetration agents




Wordpress VM with all the agents installed

The next video (6 min) shows how a service (Tetration Injector) is created and then added to the existing Application Profile:



Result of the deployment seen in CloudCenter, Tetration and AppDynamics


This video shows the deployment of the Wordpress application from the CloudCenter self-service catalog.


The next video shows the analysis of telemetry data in Tetration, once the Wordpress application is deployed:


Finally, we look at AppDynamics to see the analysis of the behavior of the application from a business standpoint:


Summary 

Only Cisco can offer automated, end-to-end, real-time application intelligence, giving you 360° visibility on both the business and the network side. Do you want to run this demo in your lab? Engage with us to set one up.
All source code, the CloudCenter Services and the Application Profiles are available on GitHub.

Credits and Disclaimer

This post describes a lab activity that was implemented by two colleagues of mine, Riccardo Tortorici and Stefano Gioia.
We created a demonstration lab to show our customers how easy it is to integrate the three products.
This is not the official Cisco documentation about the integration, which will be released soon.

References

Previous post on the integration of CloudCenter with Tetration: https://lucarelandini.blogspot.it/2017/10/turn-lights-on-in-your-automated.html



January 22, 2017

Hybrid Cloud and your applications lifecycle: 7 lessons learned


Hybrid cloud is a must nowadays; I will not spend a word convincing you (you would not be reading this post if you didn't believe it). This is the story of a real project.

This post provides more context about the story I summarized at Just 1 step to deploy your applications in the cloud(s).
The structure of the post is:
  • Motivation
  • Use Cases
  • Time
  • Software Stack
  • Benefits of the architecture we implemented
  • Lessons Learned (the most important part)




Motivation for hybrid cloud, and most of the work in my customers' projects, includes the following areas:
- Cost control (there is a strong debate: some swear it's cheaper, others have discovered hidden costs, e.g. network traffic in production, after building a business case just on the cost of VM provisioning).
- Governance model (IT must find a way to maintain control over resource usage, design patterns, compliance and security when application developers choose private or public cloud).
- Mature technical solution: architecture and technology (there are many good products and system integrators in the market).

But, once you have made a decision, what will you run in the hybrid cloud?
Will your applications be spread across the boundary of your datacenter (one tier inside, other tiers outside)?
Or can we say that it is rather a multi-cloud deployment, where you have a number of resource pools that you can use as a target for deployments?

This project was run by a large corporation to test how a hybrid cloud can be built and operated, and to verify the impact on their current organization.
It is not a full production environment; it's a pilot project that demonstrated on a small scale how easily you can build a software-defined, fully automated data center, including resource pools from both your local data center(s) and public cloud providers.

The solution is expected to be cost effective, of course, but the greater benefits come from business agility and consistent governance.


Use Cases:

The evaluation was focused on 3 main use cases, all requiring that end users order the deployment of a complex software stack from a service catalog: the target for the deployment can be either the private cloud or the public cloud, or a combination of the two. These are the areas where the implementation demonstrated the value of the multi-cloud solution:
  • Business Intelligence (self-deployment of R Studio and additional tools)
  • ETL (self-deployment of a common software for ETL that data scientists would use in autonomy)
  • High Performance Computing (HPC) on OpenStack, with the integration of a DevOps pipeline.

Subject matter experts were provided from different lines of business in the company to support the implementation activities and evaluate the result. 
The use cases represent some frequent activities that the company needs in their usual business, especially in R&D. Improving efficiency and quality in the associated processes will have an impact on the overall business outcome. The applications selected for the self-service catalog are deployed frequently (every week) and their installation takes time (several man-days, accounting for both infrastructure and software setup), delaying business objectives.

Time:

All the activities in the project were delivered on time (six weeks), including the setup of the hardware and software systems for the hybrid cloud, the implementation of the 3 main use cases plus some additional ones, the functional tests and the stress tests. This demonstrates that a proper selection of the technology and a good organization of the project allow for an immediate return.
Challenges such as setting up remote access to the lab for remote experts, constraints in the lab's networking and security configuration, and missing information about the application installation process (essential to build the automation model) slowed down the implementation. See Lessons Learned.

Software Stack:

This is a complete end-to-end solution: its adoption will happen with a phased approach, starting from the components that deliver an easy and immediate impact on the most critical business requirements, and adopting some non-functional components later to complete the architecture. The extension from private cloud (based on any combination of VMware, other hypervisors and OpenStack) to a hybrid cloud (integrating AWS, Azure and more) was very quick: it is just a matter of configuration and of defining the governance model. The checkmarks in the picture show what we realized in the short timeframe of the project; the rest is part of a phased plan. The blue boxes show the components provided by Cisco.


a full solution for the hybrid cloud

The fundamental component in this architecture is Cisco CloudCenter (CCC), that has 2 main roles: 
- providing an orchestration solution that offers users the possibility to self-deploy complex software stacks from blueprints offered in a catalog, 
- brokering cloud resources from both private and public clouds (in the project we integrated VMware, OpenStack and AWS, but more clouds are supported).
CloudCenter manages the lifecycle of software applications in the cloud (at a level of abstraction where the underlying physical infrastructure does not matter).
The OpenStack use cases for HPC are supported by a Cisco Validated Design named UCSO: it includes a reference architecture for running the Red Hat OSP8 distribution on a certified hardware platform made of Cisco UCS servers and Nexus 9000 switches. The setup process and the operations are defined by the official deployment guide, and Cisco's technical support assumes responsibility for the entire stack, including the Red Hat software.
Managing the entire DC infrastructure from a single orchestration platform was made possible by Cisco UCSD (UCS Director): a single dashboard and workflow engine to manage servers, network and storage, both physical and virtual. The status, performance and remaining capacity of all the systems were monitored with Cisco UCSPM (UCS Performance Manager).


Benefits of the architecture we implemented

The implementation of the multi-cloud solution demonstrated the major benefits that a hybrid cloud delivers.
  • A consistent architecture based on software (and eventually hardware) components that integrate easily and satisfy all the business and technical requirements.
  • All components in the architecture are loosely coupled and their integration is based on standard protocols and documented open APIs. As a consequence, every component can be replaced by an alternative solution (from a different vendor, from open source, from a custom build) with no fear of vendor lock-in.
  • The adoption of a hybrid cloud solution can happen gradually, starting with a core implementation of the most critical components (e.g. CCC, ACI and UCSO), adding more features as a second step (infrastructure automation and monitoring with UCSD and UCSPM), and a unified service catalog and ITSM portal later.

Lessons Learned:

  1. Use Cases
  2. Network Topology
  3. Security and Trust
  4. Reusable work (repositories and services)
  5. Engage SME and business owners
  6. Document
  7. Refine (iterations, devops)

Use Cases 
The selection of the use cases is important. You need a quick return to demonstrate the value of the hybrid cloud: the adoption of the hybrid model should address immediate business needs that the end users can appreciate, rather than be driven just by an industry trend.
IT projects should not start because a new technology is very smart, but because the outcome makes the business easier and more productive.
Always engage your end users in the planning phase and avoid academic use cases that have limited appeal to the decision makers. In this project we were lucky, because the steering committee had done the preparation very well in advance.
Once the models for the automation were ready, we could test any combination of deployment for the application tiers: everything in the private cloud, everything in the public cloud, or the front end deployed on one side and the back end on the other. The benchmarking capabilities of the product (CCC) allowed us to compare the price/performance ratio of the different options based on vSphere, OpenStack and AWS, specifically for each application, with tailored reporting.
 
Network Topology
A hybrid environment connects, by definition, areas that were designed separately (your datacenter and the public cloud). They have security policies and configurations that were not meant to work together, and this makes things difficult. Before you start the setup, dedicate enough time to collecting all the requirements and designing the connectivity properly.
We had some issues with the network proxies and the firewalls because of the protocols and ports we needed to open to allow a proper integration of the Cloud Management Platform (running on premises) with the orchestration engine (one instance running in each cloud region used in the project, to leverage the local API exposed by the cloud provider and to manage the lifecycle of the applications in that cloud).
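
A simple pre-flight check would have saved us time: before starting the setup, verify that the management platform can actually reach each orchestrator and repository through the proxies and firewalls. A minimal sketch in Python follows; the hosts and ports are placeholders, not CloudCenter's documented requirements.

import socket

ENDPOINTS = [
    ("cco.aws-eu-west-1.example.com", 443),   # orchestrator in a public cloud region
    ("cco.openstack.lab.example.com", 443),   # orchestrator in the private cloud
    ("repo.example.com", 80),                 # artifact repository
]

for host, port in ENDPOINTS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK      {host}:{port}")
    except OSError as err:
        print(f"BLOCKED {host}:{port} -> {err}")   # a firewall/proxy rule is needed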

communication among the components of Cisco CloudCenter

Another important requirement is to have a unique repository for all the artifacts, blueprints and installation packages of the applications: it should be reachable from all the target clouds that you plan to use, regardless of its location (it can be either in the private or in the public cloud, but all the servers you deploy will access it to stand up a new instance of the application).
The same applies to any public repository used in the setup of the applications (both commercial software and open source components, e.g. packages installed using yum).
See also CCC Components Overview for more detail.

Security and Trust
It's important that a good level of trust is established between the architects building the hybrid cloud and the operations team, especially the security guys. Special rules and new policies need to be set up to allow the new platform to work; it's impossible to keep the same old governance model that addresses a single end-user identity.
Sometimes I feel like I'm living, again, the same conflict I had with database administrators when I tried to configure JDBC connection pools in the first Java application servers in the '90s. The system should be trusted, and a delegation of the decisions (authentication, authorization and audit) accepted.

Reusable work (repositories and services)
When you model a software application to automate its deployment, you should identify any building block that can potentially be reused in a different model. If you create a reusable (parametric) deliverable and save it individually in a common repository, next time you'll have the work ready to be reused.
This applies to architectural building blocks like database servers, web servers, load balancers, firewalls, distributed caches, etc. 
If they have been created as separate services, instead of just being part of a monolithic model, they will appear in your designer's palette every time you model a similar application, and you can drag and drop them into the topology. We did that in the project and saved a lot of time in the implementation.

Engage SME and business owners
It is important that subject matter experts (SME) collaborate on the definition of the blueprints and the building of the automation model. Even though documentation exists for the deployment of the application, you should work together.
The user knows all the requirements, knows how to verify and troubleshoot, and has already encountered all the setup issues.
I've learned that the best way to document the setup process for an application, so that you can use it as a reference for the automation, is to ask the SME to install it in front of you in a clean environment where the application has never run, and record a video of the process. It's faster than writing documentation, and more complete and reliable. We did that using the desktop sharing feature in Webex and recorded the sessions.

Document
While you do the work, keep track of all the steps. Take (maybe informal) notes, but mostly take a lot of screenshots to document what you did. You can keep them on a wiki or in a shared folder; they will help a lot when you have time to create the formal documentation of the project. If you need to troubleshoot, eventually involving other people, this information will be invaluable.
Of course, versioning and taking snapshots of all deliverables also helps in case you need to go back for whatever reason. 

Refine (iterations, devops)
Create the implementation for a minimum viable product (MVP) as soon as you can. Get the product (i.e. the entire self-service catalog, or just the implementation of a single application blueprint) to early customers as soon as possible, to get their feedback before you go too deep into the implementation.
Applied to a hybrid cloud scenario, this will help to evaluate:
- quality of the service you are building, including documentation
- how much the users need it and use it in the real world
- performance of the distributed environment and any bottleneck (network, computing, configuration)
- security implications 
You will have all the time to make it perfect, through iterations that improve the implementation, collect feedback and allow tuning of the design and the configuration. There is no need to work in a hurry and make mistakes while your users wait for the final "perfect" product without seeing any progress.

October 12, 2016

Just 1 step to deploy your applications in the cloud(s)


As described in my previous post about Terraform, the deployment environment for a new application can be created "on demand" by configuring physical and virtual resources.
Good open source products allow you to describe the desired state and to automate the setup of a target infrastructure.
They can also deploy your software application and configure it properly.

But in some use cases this is not enough.
You might want to offer your users, depending on their needs and their skills, a visual catalog in a web portal.
You might want to apply a governance model based on policies, use different clouds as possible targets for the deployment, offer an easy way to manage the lifecycle of a deployment (start, stop, scale up/down, terminate) and get reports on resource usage.

If this is the case, there are good solutions available.
One of these is Cisco Cloud Center, a powerful tool that offers two main use cases: 
  • modeling the deployment of a software stack (creating a template or blueprint for common deployments) and 
  • brokering cloud services (different resource pools available from a single catalog).

An easy-to-consume (and manage) self-service catalog


Open Source or commercial products?


In the same project where I used Terraform to deploy Apache on OpenStack, I also used Cisco Cloud Center to deploy a portal application on OpenStack.
At the same time, I offered the possibility to target the same deployment to a public cloud (AWS in this particular case) or to the private cloud (choosing between OpenStack and VMware in this particular case). No duplication of effort was needed, because the model you create is not tied to a specific target cloud. It is matched, when a user orders it, with one of the clouds available to him or to his project.
So I was able to show the difference between a free, open source solution (Terraform) and a commercial product (Cloud Center) in a similar scenario.

The second option addresses different needs of the organization and offers a richer solution.
It’s up to you to evaluate which one fits your requirements better. 


Modeling, policies and multitenancy


One of the differences is that Cloud Center offers a graphical editor to model the topology and the dependencies among all the building blocks of your deployment.
You have a library of services (software applications from a repository, physical and virtual services like load balancers and firewalls).
Services can be dragged and dropped in the editor, then you set their properties and dependencies. 
The architecture of the application you're modeling can be based on a single server or a number of servers with different roles.
If the application architecture has multiple tiers, every tier gets its own attributes and policies: as an example, you can set the minimum and maximum number of instances in a cluster of web servers (or application servers or database servers). 
Autoscaling policies tell the orchestrator to increase or decrease the number of servers based on metrics like CPU or memory consumption, inbound/outbound traffic, etc.
Every time the cluster changes, the orchestrator modifies the configuration of load balancers and firewalls accordingly: no manual intervention is needed.
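
To make the idea tangible, here is a minimal sketch of the threshold logic that such an autoscaling policy applies to a tier; the metric, thresholds and function below are illustrative assumptions, not Cloud Center's actual policy syntax.

def desired_replicas(current: int, cpu_percent: float,
                     minimum: int = 2, maximum: int = 6,
                     scale_out_at: float = 75.0, scale_in_at: float = 25.0) -> int:
    """Return the new size of a tier, given its average CPU usage."""
    if cpu_percent > scale_out_at and current < maximum:
        return current + 1          # add one web/app/db server to the tier
    if cpu_percent < scale_in_at and current > minimum:
        return current - 1          # remove one server, never below the minimum
    return current                  # otherwise stay as we are

# Example: a 3-node web tier running at 82% CPU grows to 4 nodes; the
# orchestrator would then reconfigure the load balancer accordingly.
print(desired_replicas(current=3, cpu_percent=82.0))   # -> 4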
Models are saved in the catalog and offered to users in a multi-tenant organization: every tenant is given a portion of resources (target cloud environments) and services (models available in the catalog to deploy applications) that the tenant administrator can offer to his own users, groups... and sub-tenants. No tenant can see another tenant's assets.

A graphical editor to model blueprints for application deployment in hybrid cloud

Dashboard and Reporting


Every user has a dashboard that shows consolidated information about all the applications he has deployed (or that other users in the same tenant have deployed), and he can manage the lifecycle of all those deployments.

Of course, the administrator of the system sees the global view, including all the assets.
Active VMs per cloud and per application are shown in the dashboard, as well as the associated costs.



a unified dashboard for all your deployments in all the clouds
Cloud Center's Dashboard


A powerful reporting feature allows you to filter deployments and costs by user or group, application, environment and cloud.
Data can also be exported in different formats, to be consumed by humans or other systems.


powerful reporting allows for governance, showback and chargeback
Unified reporting



Architecture

The architecture of the Cloud Center product is based on two Virtual Machines: the Manager (CCM) and the Orchestrator (CCO).
The Manager is the engine where policies and application models are defined, and where the user portal runs. The Orchestrator lives within each of the target clouds (indeed, there is one CCO in each cloud region), receives commands from the Manager and executes them locally using the API of the cloud platform.
Cisco provides orchestrator images that are specialized for every cloud supported by Cloud Center, so you have a single place to manage all your cloud resources and a single model to maintain: you don't need a separate model, workflow or script for each target cloud, written against that cloud's specific API syntax. You create a single model that is completely decoupled from the deployment target: this reduces the amount of work (a single model instead of many) and makes maintenance easier and more consistent (you don't have to evolve many models for the same application).
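
To visualize the "single model, many targets" idea, here is a purely illustrative sketch (not Cloud Center's API): the same cloud-agnostic blueprint is handed, unchanged, to the orchestrator of whichever cloud the user selects at deployment time.

blueprint = {
    "name": "wordpress",
    "tiers": [
        {"name": "frontend", "service": "apache-php", "min": 1, "max": 3},
        {"name": "backend",  "service": "mysql",      "min": 1, "max": 1},
    ],
}

def deploy(blueprint: dict, target_cloud: str) -> None:
    """Hand the cloud-agnostic model to the orchestrator (CCO) of the chosen region."""
    print(f"Deploying {blueprint['name']} to {target_cloud} via its local CCO")

# One model, many possible targets: nothing in the blueprint changes per cloud.
for cloud in ("vmware-dc1", "openstack-lab", "aws-eu-west-1"):
    deploy(blueprint, cloud)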

 
One manager drives a separate orchestrator for every cloud region you have access to
Cisco Cloud Center architecture


Comparison


Two solutions for the same use case, one free and one at a cost?
Indeed, they address different requirements: as described above, Cloud Center is for enterprise organizations that need to rationalize their usage of cloud resources. It is used by corporate IT to provide flexibility and agility to their developers (within a governance model), to standardize the architecture of their projects based on blueprints (including which products, versions and setup configurations they prefer) and to get reports on consumption.

Service providers can use Cloud Center to broker third parties' resources, offering a single catalog to their customers. The hierarchical multi-tenant organization and the sophisticated cost models it supports make this simple.

I suggest you consider it if you are using, or plan to use, two or more cloud providers (counting also your private cloud or your virtualized data center). You will see an immediate benefit in terms of compliance and efficiency.

References