
June 14, 2016

Why don't you try Openstack (without getting your hands dirty)?


Is Openstack ready?
But, more importantly: are you ready for OpenStack?

 



Openstack is mature (but complex).

Surveys and statistics show that OpenStack is mature and provides a number of benefits to a broad spectrum of users, from small to large enterprises and service providers.
Almost every IT professional (including CIOs and CTOs) knows the advantages that OpenStack could offer their organization.
But many are also aware of the complexity of the technology and of the new operational processes and skills needed to set up and operate OpenStack.
A scalable and reliable production environment is different from a lab where you explore the capabilities of the new platform.
The journey to a mature adoption of Openstack is not easy and you need to invest time and money.
In addition, when you hire people (or train your own), there is a risk that another company will lure them away with a better salary, given the scarcity of these skills on the market.

So, many IT organizations - excluding cloud service providers, because that’s exactly their business - have started wondering whether it is worth spending time running the infrastructure rather than running their business applications.
If you are not a cloud provider that makes money selling IaaS, why should you dedicate additional effort to installation, monitoring, troubleshooting and release upgrades just to ensure reliability and performance for your applications (the only asset you should really care about, because your business relies on them)?

Focus on your real business.

Why don’t you delegate all that responsibility to a provider, signing a contract that puts those tasks and the SLA on them?
Doing so, you would be free to use OpenStack, getting all the benefits you expect from it, without the burden of the learning curve and the organizational changes implied by OpenStack adoption.
You would focus on using the infrastructure to develop and run your applications, no longer on running the infrastructure itself.

delegate the responsibility of the service to a specialized provider


That is called a managed service.

You own the infrastructure and exploit the value of your Data Center assets (you don’t just drop them to escape to a public cloud).
An expert team (it’s their core business) installs OpenStack in your DC and operates it every day in an HA (high availability) configuration, guaranteeing 99.99% uptime.
They take care of version upgrades and of the compatibility of the new features released by the community, using a certified configuration.
The user interface (the Horizon console, the OpenStack API and the command line interface) is available to you, so you can deploy virtual server instances, networks and storage at will. You get complete and granular reporting on the health of the system and its performance.
You are the owner, but you don't get your hands dirty with the complex stuff   :-)
You pay them for the service, and they guarantee the SLA.

Just taste OpenStack and see if you like it.

The approach described above can be a strategic decision, because you want to focus on your business applications.
But you could also use it to stand up an OpenStack environment in a very short time and test it for a while, say 3 or 6 months: does your organization adapt to it, do your applications run well, does the operational model - IaaS at home, on your infrastructure, with no cloud provider lock-in - work for you, are your developers more productive? Then you can finally decide whether to adopt it.
At that time you can choose between continuing with the managed service or doing it yourself.
It is a zero-risk trial of the technology and of the processes: if you don’t like it, you haven’t wasted any time or effort standing it up, so you can happily retreat.
You simply do not renew the service contract and that’s all: you have made a real informed decision about the adoption of Openstack.


no provider lock in for your cloud



Cisco Metapod: Openstack as a managed service.

Cisco has an offer that allows you to do exactly what I described above. It comes from the acquisition of a company whose business was precisely OpenStack as a managed service, on your premises.
They had an OpenStack distribution of their own, optimized and hardened to provide a smooth and effective service.
Now, thanks to a strong partnership with Red Hat, the team is using the Red Hat Enterprise Linux Openstack distribution (OSP8, based on Liberty).

The essential features of this service are:
- easy start: entry level contract for 90 days
- ready to go live in 2-3 weeks from the engagement
- HA included
- the infrastructure to run Openstack can either be yours or provided by Cisco
- both the Openstack API and the AWS API are exposed by the system

And the infrastructure to run it in production can be as simple as this:


the servers and the switches you need to run Openstack


The value you get from it: a well-defined SLA, installation included, maintenance and upgrades included, no cloud provider lock-in.

advantages of the Cisco Openstack managed service: Metapod


I believe that Cisco Metapod is a very good option to start with Openstack.
You can put your foot in the water to test the temperature, then decide to take a bath if you like it.

you can decide if you like Openstack without investing in a big project


References

Openstack users survey 
Cisco Metapod official page
Cisco and Openstack on this blog 

February 2, 2016

Governance in the hybrid cloud

This post shows how a company can solve one of the main issues that CIOs face today: so-called Shadow IT.



This term refers to the uncontrolled use of cloud services (IaaS, PaaS or SaaS) in a project, decided by the application developers or designers because they think it benefits the agility of the project.



Sometimes leveraging available services is really good for a project: it's useless to rebuild something that is easily available as a standardized service. Even when the IT organization of your company (or your customer, if you're a consulting company) provides the building blocks that you need for your architecture, it could be difficult to get approvals or a fast enough provisioning.
So there are valid reasons to incorporate public cloud services; we can't blame those who try to fully exploit a Service Oriented Architecture.



Unfortunately, this way of assembling applications from any available resource you consider useful creates trouble for the IT organization.
Besides additional costs that arrive as a surprise (developers bill a personal or corporate credit card, but sooner or later those costs are factored into the cost of the project), some corporate rules could be violated without anyone even being aware of it.
Just a few examples: storing confidential data in a database outside the company's data center, invoking services without encrypting the input/output parameters, or failing to guarantee end-to-end High Availability or Disaster Recovery for the entire system.


The subject of costs is easily underestimated: at development time you need very limited cloud resources, for a limited time, so the bill is close to zero before the application goes to a full production environment. After that, it will need more computing power, more storage and of course more bandwidth to serve all the users. Cloud bills tend to grow surprisingly fast in these conditions.

So the CIO faces a dilemma: try to block or limit the usage of cloud services - containing cost and risk, but appearing to slow innovation down and to prevent the lines of business from achieving their business results - or allow maximum freedom, with the additional risk of becoming irrelevant because the lines of business can bypass the IT organization?


There is a middle ground: IT can offer facilitated access to cloud services, adding them to a Service Catalog where users can self-serve, ensuring compliance by design.
Public cloud services are selected based on agreed architectural and security policies; they are documented, audited and reported, and possibly subject to approval from a financial standpoint.



One possible implementation of such a catalog can be based on the Cisco ONE Enterprise Cloud Suite, as I did in a recent project at one of my customers.

Cisco ECS is a reference architecture comprising a flexible Service Catalog, an automation engine and a hybrid cloud platform that allows you to extend your data center into a kind of "bubble" in the public cloud. If you need additional capacity, you can burst your workloads into this virtual private data center while keeping all the security and networking policies you defined in your private cloud: even the IP addresses of the virtual machines do not change, nor does the secure segmentation of the application layers or any other policy.

I'm not going to describe the Cisco ECS, because you can find the official documentation here.
Instead, I will show how we extended the services offered in this catalog with CliQr Cloud Center, which manages the provisioning and lifecycle of applications in the cloud. The great IaaS capabilities of Cisco ECS are thus complemented by the deployment of simple or complex applications and software stacks, which you can target at any cloud just by selecting it from a drop-down list.

The deployment template is not cloud-dependent, and the user can - within the limits of their authorization level and the corporate policies - choose to provision it in the private cloud (e.g. on VMware in the corporate data center) or in the public cloud (e.g. AWS or Azure).
Lifecycle operations (start, stop, resume, delete, etc.) are also offered, as well as migration to a different cloud: from private to public once QA testing is done and you're ready for production, from one public provider to a more cost-effective one, and so on.

THIS POST HAS BEEN REDACTED

After the publication of this post, Cisco announced the intent to acquire CliQr (not because of the post :-) ), and our policies require that we don't speak about deals while they are in progress. I can't show how we integrated CliQr in this project, because the official statement on the reference architecture will be communicated by Cisco once the acquisition is completed.


References:
http://blogs.cisco.com/datacenter/introducing-cisco-one-enterprise-cloud-suite
http://www.cliqr.com/



September 6, 2015

The Phoenix Project - how DevOps can change your life

It’s been a long time since my last post: as promised, I only post information from my real-world experience and I avoid echoing messages from marketing   :-)
I haven’t been idle, though: I’ve been working on customer projects that can’t be mentioned publicly (yet).

But I’ve also been on vacation, and I could finally read a great book, “The Phoenix Project”.
It is a novel and a very educational reading at the same time.
I wholeheartedly recommend reading it (I’m not earning anything from the book) because I enjoyed it a lot and learned important lessons that deserve to be spread, for our common benefit as an IT community.








You don't need to be an IT professional but, if you are, you will benefit the most and the book will bring back many familiar stories.
Since I’ve led some mission-critical projects, and I still carry the marks of both tragedy and triumph, this story reminded me of those great moments.
If you are new to DevOps, you can read my introductory posts in this blog.

Essentially, The Phoenix Project describes the evolution of IT in a company that, on the verge of a complete failure, pioneers DevOps and revolutionizes the way they work.
The impact on the core business is huge and their strategy creates a gap with the competition thanks to agility and flexibility.
Also personal lives are affected because the new organization ends the tribal war among Development, Operations, Security and the business stakeholders: they establish respect, trust and satisfaction for all the involved parties.
Of course the DevOps methodology is not a magic wand that makes the miracle for them: it is the outcome of a new way of thinking and working together.
This is a story about people, rather than technology.

If every IT department put itself in the others' shoes, instead of finger-pointing, they could help each other reach a common goal.
If IT as a whole is not a counterpart of the LOBs but a partner (understanding why it is asked for something, instead of focusing only on how to do it), it can offer huge value to the company… and be highly rewarded (see the coup de théâtre at the end of the story).
This would stop the “dysfunctional marriage” between two parties that don’t understand each other and suffer from a forced relationship.
In my experience, most business people see IT as the provider of a service that is never satisfactory.
On the other side, IT feels that business people don't understand the complexity and effort required and ask for impossible things.
In most cases, they are bound to a traditional way of working and don’t even raise their head to see that they already own what’s needed to win.
They are overwhelmed by current tasks, troubleshooting and budget cuts, so they can’t think strategically.

The great idea, here, is importing the concepts and the experience from Lean Manufacturing into IT.
They start considering the IT organization similar to a production plant and optimizing its organization.
Finding bottlenecks and avoiding rework are the first steps, then automation follows to free the smart guys from the routine work and so the quality skyrockets.
At the end of the story, releasing the new features required by the business no longer takes months (with high risk at roll-out): they can deploy 10 project builds per day!

That may not sound impressive if you consider that these days some companies achieve thousands of deployments per day thanks to Continuous Integration and Continuous Deployment.
But it is light years ahead of what most of my customers are doing, though some are exploring DevOps now.
Of course, one organization cannot change overnight.
You shouldn’t see the adoption of DevOps as a single step, and be scared by the effort.
In the book, they learn gradually and improve accordingly: you could do the same.
They go through a process made of Three Ways, until they master them all.
A brief description of the three ways follows, thanks to Richard Campbell:

The First Way – Systems Thinking
• Understand the entire flow of work
• Seek to increase the flow of work
• Stop problems early and often – Don’t let them flow downstream
• Keep everyone thinking globally
• Deeply understand your systems

First Way Goals
• One source of truth – Code, environment and configuration in one place
• Consistent release process – Automation is essential (one click)
• Decrease cycle times, Faster release cadence

The Second Way – Feedback Loops
• Understand and respond to the needs of all customers (internal and external)
• Shorten and amplify all feedback loops
• With feedback comes quality

Second Way Goals
• Defects and performance issues fixed faster
• Ops and InfoSec user stories appear as part of the application
• Everyone is communicating better
• More work getting done

The Third Way – Synergy
• Consistent process and effective feedback result in agility
• Now use that agility to experiment
• You only learn from failure – So fail often, but recover quickly

Third Way Goals
• Ability to anticipate, even define new business needs through visibility in the systems
• Ability to test and optimize new business opportunities in the system while managing risk
• Joy

You should not think that The Phoenix Project is a technical book: though I’ve learned new things or reinforced concepts I knew already, the value I found in it is motivational.
It really moves you to action, and you want to measure the immediate improvement you can get.
More, you want to partner with other stakeholders to achieve common goals.

The Essence of DevOps
• Better Software, Faster
• Pride in the Software You Build and Operate
• Ability to Identify, Respond and Improve Business Needs

My final take from this story is that everybody in IT (as in other fields) should:

- take risks and innovate: if you fail, the result will probably be no worse than standing still
- invest time - even at the cost of delaying important targets - to think strategically: the return will more than repay the effort
- study what others have already done: learning by example is much easier
- always try to understand your counterpart before fighting on principle: there could be a common advantage if you shift your perspective

Some useful references:
Other DevOps books:
- Visible Ops Handbook (Gene Kim)
- Web Operations (Allspaw/Robbins)
- Continuous Delivery (Humble/Farley)
- Lean Startup (Eric Ries)

May 23, 2015

A powerful DevOps tool: Ansible

Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy. Avoid writing scripts or custom code to deploy and update your applications— automate in a language that approaches plain English, using SSH, with no agents to install on remote systems.
At the Openstack Summit in Vancouver I attended a great session presented by two Cisco colleagues:
Juergen Brendel (@brendelconsult), David Lapsey (@devlaps) both from Cisco Metacloud.
These are my notes, which you might find useful as an easy introduction.
But I suggest you watch the recording of their session, linked at the end of this post, because it is very educational.

Configuration Management tools
They are better than scripts, which in turn are better than written manual instructions, which are better than a seasoned administrator's memory.
CM tools describe the desired state of a resource (e.g. a server) via assertions (ensure that… it exists/is installed/...): a declarative way to provision resources.
Comparison of existing tools:
Puppet dates from 2005 and Chef from 2009 - they are powerful and rich.
Salt dates from 2011 and Ansible from 2012 - they are easy and quick.

Ansible
It's written in Python and uses YAML to create Playbooks (descriptions of the desired state).
It's simple: no central server to maintain, no key management, NO AGENT on the managed servers - but it requires SSH and Python on the target server (PowerShell support is coming).
Ansible executes commands in explicit order (so there are no race conditions due to dependencies).

Modules
Modules are pieces of code that do a single thing.
There are hundreds of modules available to reuse.
They’re copied to the target server at runtime, executed there (they return results) and then deleted.

Inventory file
It defines hosts and groups them, so that you can apply the same commands to all of them at once.
Ad-hoc commands apply to groups - example: ansible -i hosts europe -a "uname -a", where europe is a group.

Playbooks
They are written in YAML and tell Ansible what to do (a sequence of tasks).
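Just to give an idea (a minimal sketch of mine, not taken from the session: the package name is illustrative, and it assumes CentOS/RHEL targets listed in an inventory file under the europe group mentioned above), a playbook that ensures a web server is present and running could look like this:

---
# site.yml - minimal example playbook (illustrative names)
- hosts: europe                  # the inventory group to act on
  become: yes                    # escalate privileges on the target servers
  tasks:
    - name: ensure nginx is installed
      yum: name=nginx state=present
    - name: ensure nginx is running and starts at boot
      service: name=nginx state=started enabled=yes

You would run it with "ansible-playbook -i hosts site.yml"; running it a second time changes nothing, because each task only asserts the desired state.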

Projects layout
An Ansible project is made of:
config files
inventory files
group variables
YAML files (the playbooks)

Roles
contain tasks, handlers, templates, files, vars
apply to servers (that have the same role)
can be included in playbooks

Usage of API
to manage infrastructure and services
there are modules available for public cloud and private cloud management systems
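As an example (a hedged sketch: the AMI id, key pair and region are made up, and it assumes the boto library plus AWS credentials exported as environment variables on the control machine), the ec2 module can launch instances as just another task in a playbook:

---
# provision.yml - sketch of provisioning cloud resources with an Ansible module
- hosts: localhost
  connection: local              # the API calls run from the control machine
  gather_facts: no
  tasks:
    - name: launch two instances for the web tier
      ec2:
        key_name: my-keypair           # hypothetical key pair
        instance_type: t2.micro
        image: ami-12345678            # hypothetical AMI id
        region: eu-west-1
        count: 2
        wait: yes
      register: web_instances         # the result can feed later tasks (e.g. add_host)

The same pattern applies to the modules for OpenStack, VMware and other platforms: provisioning the infrastructure becomes one more step of the same playbook that configures it.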

Vagrant
Vagrant is a tool that matches Ansible very well:
it is used to create VMs (locally or in the cloud)
it can use Ansible as a provisioner
it is written in Ruby
commands:
vagrant up - creates the vm
vagrant provision - calls Ansible

Takeaways
A single Ansible playbook can be used to deploy apps locally and in the cloud
Download Ansible for free from GitHub.


February 12, 2015

DevOps - Tools and Technology

This post is the continuation of the DevOps - Operational model post in this blog.

We have seen how DevOps processes and organization can help the agility of IT, enabling a huge value for the business.
Let’s investigate the tools that smart organizations use to implement DevOps in the real world.
And let’s try to understand how, in addition to code management, the lifecycle of a software application can be optimized by managing the infrastructure as code.
At the end of the day, we want to apply the following picture to the infrastructure as well.




Usually different environments are created to run applications, often cloned for each Tenant (customer, project...): development, integration test, QA test, production, Disaster Recovery.
The infrastructure must provide similar topology and functions, with different scale and HA requirements.
Those environments are sometimes used for just a few days; then they are no longer needed and the resources could be reused for the next project.
If we were able to generate a new environment "end to end" when it is required, and to release all the resources to a shared pool afterwards, it would help a lot in optimizing resource usage.
The economy of scale provided by shared infrastructure and resource pools will add to the simplicity and speed of the operations.

The following picture shows the cycle of the builds (for both the sw application and the infrastructure) that optimizes the time and the resources.




There are a number of tools and solutions that can help automate this process.
Some apply to specific phases, others to the end-to-end DevOps cycle.
Collaboration tools also help the team(s) work together, for their own benefit and the entire company's: from http://www.collab.net/solutions/devops



The most-used DevOps tools, as far as I know from direct experience and investigation, are Jenkins, Vagrant, Puppet and Chef.
Here is another possible chain of tools that cover the entire process:


Stateless Infrastructure (also known as SDDC)

We understood that the maximum benefit comes from being able to create and destroy environments on demand, allocating resources just when needed (we can also consider Disaster Recovery an important use case in this scenario, but in that case you should also ensure that data has been replicated before the event).

Infrastructure as code is a core capability of DevOps that allows organizations to manage the scale and the speed with which environments need to be provisioned and configured to enable continuous delivery.
Evolving around the notion of infrastructure as code is the notion of software-defined environments.
Whereas infrastructure as code deals with capturing node definitions and configurations as code, software-defined environments use technologies that define entire systems made up of multiple nodes — not just their configurations, but also their definitions, topologies, roles, relationships, workloads and workload policies, and behavior.

Stateless Computing and Stateless Networking are important innovations that some vendors (Cisco could be considered a leader here) have brought to the market in the last 5 years.
Policy based configuration and the availability of software controllers for all the components of the architecture allow the separation of the modeling from the physical topology.

Servers

As an example, UCS servers (up to 160 in one management domain, but domains can be joined to share resources and policies) are stateless.
You can imagine each server (either a blade or a rack-mount server) as a dumb piece of iron, before you push its identity, its features (e.g. number, type and configuration of the network interfaces) and its behavior as a piece of configuration.
It is like adding the soul to a body.
Later you can move the same soul to a different body (maybe a more powerful one, say from a 2-CPU server to a 4-CPU one). The new machine will boot as if it were the original.
This can be useful to recover a faulty server or to do DR, but also to repurpose a server farm in a few minutes (and possibly restore the previous state the day after).
The state (identity, features and behavior) is defined by an XML document that can be stored, versioned and managed as code in a repository (in addition to living in the embedded UCS Manager).
This abstraction of the server from the actual machine makes the management easier and was the main factor for the incredible success of UCS as a server platform.

Networks

Similarly, in the networking domain, we have had a quantum leap in network management with Cisco ACI (Application Centric Infrastructure).
For those who have not met ACI yet, I have published an “ACI for Dummies” post.
In a few words, ACI brings the management of physical and virtual networks together.
It has a high-performance, scalable fabric made of spine and leaf switches, managed by a software controller called APIC.
APIC also integrates the virtual switches in the different hypervisors, so that its policy model can be extended to the virtual end points.
A GUI is provided to manage APIC, but essentially you would drive it through the excellent open API offered to orchestration systems and - of course - DevOps tools.
XML (or JSON) artifacts can be stored in a repository as code, and pushing them to APIC will create your new Data Center on the fly.
You can create new Tenants with dedicated resources, or deploy the infrastructure for a new application in such a way that it is isolated (in terms of security, performance and stability) from others, though running on a shared infrastructure.
It would take just the time of a REST call, where you push the new policy to the controller.
And of course you could use the same templates in the different environments: development, integration test, QA test, production, Disaster Recovery
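To make the "REST call" point concrete, here is an illustrative sketch of mine (the APIC address, credentials and tenant name are made up, and certificate validation is disabled for brevity) that uses the generic Ansible uri module to log in to APIC and push a tenant definition - exactly the kind of JSON artifact you would keep under version control:

---
# aci_tenant.yml - illustrative sketch: push a versioned JSON policy to the APIC controller
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    apic: https://apic.example.com     # hypothetical APIC address
  tasks:
    - name: authenticate to APIC and obtain a session token
      uri:
        url: "{{ apic }}/api/aaaLogin.json"
        method: POST
        body: '{"aaaUser":{"attributes":{"name":"admin","pwd":"password"}}}'
        body_format: json
        validate_certs: no
        return_content: yes
      register: login

    - name: create (or update) a tenant from the JSON artifact
      uri:
        url: "{{ apic }}/api/mo/uni.json"
        method: POST
        body: '{"fvTenant":{"attributes":{"name":"Project-X"}}}'
        body_format: json
        validate_certs: no
        headers:
          Cookie: "APIC-cookie={{ login.json.imdata[0].aaaLogin.attributes.token }}"

The same two calls, with a richer JSON body, can carry application profiles, EPGs and contracts, so the whole network definition of an environment travels with the rest of your code.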

The previous generation of network devices (e.g. the Nexus family) can be managed in a DevOps scenario as well.
They offer APIs and have Puppet agents onboard. A version of the APIC controller has also been created for networks outside ACI (APIC-EM - https://developer.cisco.com/site/apic-em/discover/overview/).
The Cisco DevNet community provides a lot of information and samples at https://developer.cisco.com/site/devnet/home/index.gsp

I wrote a short post on Ansible here: http://lucarelandini.blogspot.com/2015/05/a-powerful-devops-tool-ansible.html where a great recorded session from the Openstack Summit is linked.

You might be interested also in my post on DevOps, Docker and Cisco ACI.

 

February 2, 2015

DevOps - Operational model

This post is the continuation of the Why DevOps: definition and business benefit post.

As it happens in other areas of IT, technology is an important factor for success, but it is not the most important one.
The human factor is what really makes the difference for successful projects.
So skills, common goals, organization and governance (and a business strategy) will make you win with any tool.
But if you lack them, the best technology in the world will fail to provide a positive outcome.

In this post we’ll see how a lot of companies have adopted DevOps practices, using a variety of products (that we'll examine next time), and obtained an important return.

Why Projects Fail: The Business Management Chasm

Question: Over the past year, what percentage of your current projects have failed to meet your success criteria?
Answer: 19% (n=84)
Question: Why?
Answer:
  1. Poor requirements gathering/scope creep: 23%
  2. Lack of resources (staff and budget): 21%
  3. Changed business priorities: 19%
  4. Lack of business stakeholders ownership: 16%
  5. Testing delays: 10%
  6. User requirements changes: 10%
  7. Vendor performance: 1% 
If you sum up points 3 and 4 you get 35%.
You can easily see that if the application lifecycle were leaner and faster, those projects wouldn't lose their chance of success.
Quick wins are the most important key to leading a project to its final goal: you should deliver tangible value as early as possible, to keep traction and be able to react to changes.



Businesses today are moving toward continuous delivery as a methodology and tool to meet the ever-increasing demand to deliver better software faster. Continuous delivery, with its emphasis on keeping software in a release-ready state at all times, can be seen as a natural evolution from continuous integration and agile software development practices. However, the cultural and operational challenges to achieve continuous delivery are even greater.
For most organisations, continuous delivery requires adaptation and extension of existing software release processes. The roles, relationships, and responsibilities of people across the organisation may be impacted. The tools used to deliver, update, and maintain software must support automation and collaboration properly, minimising delays and providing tight feedback cycles across the organisation. While these changes can be a huge challenge to implement for organisations that must live within regulatory and operational constraints, there are many practical steps you can take to make real progress today.

With that in mind, here are 7 key pre-requisites organisations should consider when making a successful transition to Continuous Delivery.
1. Make Sure Development, QA & Operations Teams Have Shared Goals & Communicate
2. Get Continuous Integration Right Before Making The Step To Continuous Delivery
3. Automate & Version Everything
4. Share Tools & Procedures Between Teams
5. Make Your Application Production-Friendly: Make Deployments Non-Events
6. Make Your Infrastructure Project-Friendly: Empower The People & The Teams
7. Make Application Versions Ready To Be Shipped Into Production

Continuous Delivery is not just about a set of tools, ultimately it is also about the people and organisational culture. Technology, people and process all have to be aligned to make Continuous Delivery successful in any organisation, a collaborative approach is fundamental to its success. If organisations are to reap the rewards of a more fluid, automated approach to software development that can also provide them business agility – they need to implement these best practice steps on the path to Continuous Delivery.


(1) “ Emphasize the performance of the entire system” – a holistic viewpoint from requirements all the way through to Operations
(2) “Creating feedback loops” – to ensure that corrections can continually be made. A TQM philosophy, basically.
(3) “Creating a culture that fosters continual experimentation and understanding that repetition and practice are the pre-requisites to mastery”
These are excellent guidelines at a high level, but we’d like to see a more operational definition. So we’ve made up our own list!
As a starter, we propose that:
  1. You must have identified executive sponsors / stake holders who you are actively working with to promote the DevOps approach.
  2. You must have developed a clear understanding of your organisation’s “value chain” and how value is created (or destroyed) along that chain.
  3. You must have organizationally re-structured your development and operations teams to create an integrated team – otherwise you’re still in Silos.
  4. You must have changed your team incentives (e.g. bonus incentives) to reinforce that re-alignment – without shared Goals you’re still in Silos.
  5. You must be seeking repeatable standardized processes for all key activities along the value chain (the “pre-requisite to mastery”)
  6. You must be leveraging automation where possible – including continuous integration, automated deployments and “infrastructure as code”
  7. You must be adopting robust processes to measure key metrics – PuppetLab’s report focuses on improvement in 4 key metrics – Change Frequency, Change Lead Time, Change Failure Rate and MTTR. We suggest Availability, Performance and MTBF should be in there too.
  8. You must have identified well-defined feedback mechanisms to create continuous improvement.


Of course, you will need some investment to get there. It can be gradual, and the payback from the adoption of DevOps will help fund the next steps:



Two main processes that make DevOps work are Continuous Integration and Continuous Delivery.

Continuous integration (CI) is the practice, in software engineering, of merging all developer working copies with a shared mainline several times a day.
CI was originally intended to be used in combination with automated unit tests written through the practices of test-driven development. Initially this was conceived of as running all unit tests in the developer's local environment and verifying they all passed before committing to the mainline.
Later elaborations of the concept introduced build servers, which automatically run the unit tests periodically or even after every commit and report the results to the developers.
In addition to automated unit tests, organisations using CI typically use a build server to implement continuous processes of applying quality control in general — small pieces of effort, applied frequently. In addition to running the unit and integration tests, such processes run additional static and dynamic tests, measure and profile performance, extract and format documentation from the source code and facilitate manual QA processes. This continuous application of quality control aims to improve the quality of software, and to reduce the time taken to deliver it, by replacing the traditional practice of applying quality control after completing all development.



Continuous Delivery (CD) is a design practice used in software development to automate and improve the process of software delivery. Techniques such as automated testing, continuous integration and continuous deployment allow software to be developed to a high standard and easily packaged and deployed to test environments, resulting in the ability to rapidly, reliably and repeatedly push out enhancements and bug fixes to customers at low risk and with minimal manual overhead. The technique was one of the assumptions of extreme programming but at an enterprise level has developed into a discipline of its own, with job descriptions for roles such as "buildmaster" calling for CD skills as mandatory.



Continuous delivery defines a deployment pipeline as a set of validations through which a piece of software must pass on its way to release. Code is compiled if necessary and then packaged by a build server every time a change is committed to a source control repository, then tested by a number of different techniques (possibly including manual testing) before it can be marked as releasable.


Characteristics of a Successful DevOps Team

No matter how you’re using DevOps practices — whether your company has a DevOps department or cross-functional teams that share DevOps tools and practices — there are distinct characteristics of DevOps teams that align with high IT performance.
Here’s a checklist that’s food for thought (and fuel for future improvement!).
These points are drawn from the 2014 State of DevOps Report, and from suggestions of DevOps experts like Paul Duvall, Jez Humble and Joanne Molesky.

Effective DevOps teams don’t think of issues as “someone else’s problem”. 

Developers, IT operations, quality assurance engineers, database admins, and business analysts collaborate, and everyone checks code into the version control system. Everyone is part of the delivery process — and held accountable for it.

We Automate Build, Deployment, and Testing Phases.

With automation, you reduce the chance of human error as you transition code from one phase to the next. Because you’re automating configuration of all environments, you’re minimizing issues caused by writing code in a development environment that is different from the production environment.
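As a small illustration of what this means in practice (a hedged sketch using a configuration-management playbook in the Ansible style; the package, template and service names are made up), the same play can configure development and production: only the inventory passed on the command line changes, so the environments cannot drift apart.

---
# app_env.yml - one environment description applied to dev and prod alike (illustrative)
- hosts: app_servers             # group resolved by whichever inventory you pass with -i
  become: yes
  tasks:
    - name: ensure the application runtime is installed
      yum: name=java-1.8.0-openjdk state=present
    - name: render the application configuration from a single template
      template: src=app.conf.j2 dest=/etc/myapp/app.conf
    - name: ensure the application service is running
      service: name=myapp state=started enabled=yes

Running it against the dev inventory and then against the production inventory produces identically configured environments, differing only in the hosts and variables defined per inventory.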

Our Culture Reflects Open Communication and Collaboration.

Developers and IT operations attend planning meetings, standups, and release postmortems. Developers share responsibility for writing testable and deployable code, and if code fails in production, the team is kept in the loop, working together to review causes and identify solutions. 

We Have Routine Deployment Processes and Shared Monitoring Practices.

Team members can accurately report how long it’ll take to deploy a new feature, or even a few lines of code, to production. They can identify and remove roadblocks, without a lot of red tape. They understand the key performance and availability metrics to measure, and track them against larger business goals.

We Implement a Continuous Delivery Pipeline.

Continuous delivery, implemented right, lets you release changes continually to production. That lets you test new features with real customers, facilitating quick feedback about how they’re being used. Continuous delivery helps companies make better business decisions and move more quickly than their competitors.

We Use Version Control For All Production Artifacts.

Version control systems help you track changes and quickly find the source of an error, reducing time to recovery. Everything required to launch a change into the production environment must be checked into version control, including application code, application and system configurations, tests, and deployment scripts.

We Trust Each Other, and Collectively Enable Continuous Improvement.

We deliver on our promises to the business, and to our customers. We continually work on developing collaboration, clear communication and trust between team members. We are continually learning and improving as a team. Most important of all: We spend less time fighting fires and more time focusing on great work.



When it’s well executed, continuous delivery allows an organization to respond more quickly to its market and to customers, both internal and external. It also makes life saner for people in IT operations, software development and quality testing teams. Instead of long periods of development punctuated by looming deadlines, big dramatic releases and panicked remediation of serious bugs, software releases are small, predictable and less dramatic… even boring :-)

Top Benefits of Continuous Delivery

Deliver software with fewer bugs and lower risk.
When you release smaller changes more frequently, you catch errors much earlier in the development process. When you implement automated testing at every stage of development, you don’t pass failed code to the next stage. And it’s easier to roll back smaller changes when you need to.

Release new features to market more frequently — and learn.
Releasing new features early and often — even in a minimally viable state — means you get more frequent feedback, giving you the ability to iterate and learn from your customers. Enlisting customers as development partners gives them a sense of co-ownership and loyalty, and makes them more likely to forgive when you stumble.

Respond to market conditions more quickly.
Market conditions change constantly. Whether you’ve just discovered a new product is losing money, or that more customers are visiting your site from smartphones than laptops, it’s much easier to make a fast change if you are already practicing continuous delivery.

Life is saner for everyone: IT operations, software development, QA, product owners and business line owners.
Continuous delivery means the responsibility for software delivery is distributed much more widely, and this shared responsibility and collaboration make life better. Continuous delivery also takes a lot of stress out of software releases. Releasing smaller changes more often gets everyone used to a regular, predictable pace, leaving room to come up with ideas and actually enjoy the work. Best of all, a successful release becomes a shared success, one you can all celebrate together.


In the next post, we’ll discuss the most-used DevOps tools and how the infrastructure can be managed “as code”, meaning it is dynamically provisioned, creating the needed environment every time you deploy a new version of the code.
Link to the DevOps - Tools and Technology post.


January 25, 2015

Why DevOps: definition and business benefit

Can you just imagine the magic of pushing one button and seeing your company’s new project materialize in a production environment? The software application code compiled and built, the data center infrastructure (heterogeneous and complex) set up, the application deployed and tested - ensuring compliance - and the business stakeholders given their beloved new campaign, ready to use?



I’m working with colleagues and customers to better understand why there’s so much interest around DevOps, what the business benefits are and which technology is useful.
Here is the state of our reasoning, along with some notes I collected during my research.

To make the matter easier for people who don’t have experience managing the software release cycle, I decided to take a triangle approach: analyse the business drivers that need to be addressed, then the operational model that could provide the expected results and, finally, the enabling technology. With this top-down approach, understanding the concepts gradually should be easier for IT professionals who are not experts in the field.



Business Drivers

In every company, Lines Of Business have a dream: having a new solution live in one month. It could be a marketing campaign, a new service for their customers, a process to produce new goods.
They think they just need smart developers and the availability of the required infrastructure, which - given the spending on IT - should not be an obstacle.
Unfortunately, sometimes they feel that IT is not efficient enough. It’s not a matter of technology, but of organization.

Some notes from https://puppetlabs.com/blog/why-every-cfo-should-advocate-devops (Bill Koefoed, the author, is the chief financial officer of Puppet Labs).
IT is the manufacturing of the 21st century. Let’s face it, most products and services these days depend on software, from social media to teleconferencing to household appliances that interact via the internet.
To get ahead of competitors, you have to get your new products and services out fast, test them for customer response, and quickly update to satisfy customer desires. Even as you’re increasing your rate of output, you have to reduce flaws, whether in delivery or the product itself.
That’s why DevOps is so important: The tools, practices and cultural orientation of DevOps enable greater efficiency in IT. Our 2014 State of DevOps report bears this out, both in terms of software throughput and business results. From the standpoint of throughput, we validated last year’s findings that high-performing IT teams (as defined by deployment frequency, lead time for changes and mean time to recover from failure) deploy up to 30 times more frequently than their lower-performing peers, with 50 percent fewer failures. This year, the most provocative finding was the strong connection we found between IT performance and financial performance. Companies with high-performing IT teams got better business results, as they were:
  • 3.3 times more likely to have met or exceeded the company’s productivity goals.

  • 1.6 times more likely to have exceeded company profitability targets.


In this post I will describe DevOps in general terms, with some focus on the business benefits.
In my next posts I will approach the Operational Model and investigate the technology that can help. Not only tools for Continuous Integration and Continuous Delivery, but also the concept of “infrastructure as code” that allows a flexible and agile use of the infrastructure resources in the same cycle as the software for the applications.


DevOps is a software development method that stresses communication, collaboration (information sharing and web service usage), integration, automation and measurement between software developers and Information Technology (IT) professionals. DevOps is a response to the interdependence of software development and IT operations. It aims to help an organization rapidly produce software products and services and to improve operations performance - quality assurance.

The specific goals of a DevOps approach span the entire delivery pipeline, they include improved deployment frequency, which can lead to faster time to market, lower failure rate of new releases, shortened lead time between fixes, and faster mean time to recovery in the event of a new release crashing or otherwise disabling the current system. Simple processes become increasingly programmable and dynamic, using a DevOps approach, which aims to maximize the predictability, efficiency, security, and maintainability of operational processes. Very often, automation supports this objective.

In a traditional organization with separate departments for Development, IT Operations and QA, development and deployment activities (even when using methodologies such as agile software development) do not have deep cross-departmental integration with IT support or QA. DevOps promotes a set of processes and methods for thinking about communication and collaboration between departments.



The adoption of DevOps is being driven by factors such as:
  1. Use of agile and other development processes and methodologies
  2. Demand for an increased rate of production releases from application and business unit stakeholders
  3. Wide availability of virtualized and cloud infrastructure from internal and external providers
  4. Increased usage of data center automation and configuration management tools

Use Cases

Automating the release of a complete system can provide advantages in the following situations (a partial list - if you have more references, please add a comment to this post):
  1. daily builds in the development environment
  2. move the system to the integration environment and to QA
  3. regression testing after a patch
  4. move a system from testing to production
  5. deploy a copy of the system for a new tenant
  6. copy a system to Disaster Recovery
Keep in mind that, thanks to the management of "infrastructure as code", you can have the end to end system managed this way... not only the software code.

Adoption in the world
DevOps is ramping up in the US; it seems to be a little late in Europe, as many IT innovations are.
Companies that benefit already from the introduction of this methodology are:
Google, Amazon, Netflix, Facebook, Twitter, Pinterest, Bank of America, Cisco and more...

Industry headlines tell us every day that companies rise and fall on moments of infectious delight and irritated disappointment. It's not enough to have a great idea and execute on it once. You have to execute, get feedback, refine, and execute again - and again and again. To keep competitors from grabbing a piece of your market, you need to cycle with ever-increasing speed and agility.

Multiple, independently conducted research studies show that, not only are enterprises already adopting DevOps, they are achieving substantial outcomes.
One such study, conducted by independent research organization IDG, shows that enterprises (measured by having more than $500 million in revenues) are adopting DevOps at an even faster rate than smaller businesses. Another study, conducted by independent research firm Vanson Bourne, found that large enterprises are not only adopting DevOps, but more than 90% have seen or expect to see significant benefits, with quantifiable improvements in delivery speed, development and operations costs, defect detection, ability to innovate, and many more, ranging from 17% to 23%. Then there is additional research from InformationWeek, which also shows high rates of adoption and benefits for large enterprises (measured by having 5,000 or more employees).

In the next post, I’ll try to define an operational model:
what teams are involved
what processes do you need
what information do you need 
what roles do you need
what skills do you need 

Link to next post: DevOps - Operational Model.

Sources:
1 - https://puppetlabs.com/blog/why-every-cfo-should-advocate-devops (Bill Koefoed, the author, is the chief financial officer of Puppet Labs.)