What works and what doesn't work in software development? For the first 10 years or so of my career, we followed a strict waterfall development model. Then in 2008 we started switching to an agile development model. In 2011 we added DevOps principles and practices. Our product team has also become increasingly global. This blog is about our successes and failures, and what we've learned along the way.

The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions. The comments may not represent my opinions, and certainly do not represent IBM in any way.

Friday, July 6, 2012

DevOps Days Open Space: DevOps for Legacy Code and Real Servers

I proposed the topic for this Open Space session: DevOps for Legacy Code and Real Servers.  Here are some of the insights I gleaned from it.

Legacy servers
  • Cloud platforms are evolving to manage real, legacy servers in addition to virtual machines.
  • Chef, for example, can manage both physical and virtual servers.  It can also manage clean OS installations as well as update existing servers.  There's a tool called Blueprint that will attempt to reverse-engineer Chef automation from an existing server.
  • It's difficult to re-create systems that weren't automated in the first place.  However, investing the time and effort to automate them greatly reduces your risk: what if the server were destroyed in a fire?
  • Sometimes people have even lost the source code for applications that are running in production.  That is a very risky state to be in.
  • Another option is to clone the system into a VM first, snapshot it, and then do your exploratory work on the VM.
  • You can also copy some of your production web traffic to your staging servers.
  • Or, you can start deploying new applications to VMs, and gradually shift your enterprise code to VMs.
Mainframe systems
  • Mainframes are the backbone of many legacy systems, and they are not going away.  
  • People who are used to working with mainframes have a different culture and language than people who are used to developing new web applications.  There's a communication gap to bridge before they can benefit from DevOps principles and practices.
  • One option is to just get an enterprise's web applications to adopt DevOps and punt on the mainframe applications.  But why can't we do the same thing for mainframe applications?
  • Mainframes have limited logging and monitoring systems.  Why?
  • Mainframes have limited tooling.  Why?
  • It's very difficult to see what's going on within an application.
  • It's very difficult to debug applications.
  • Deployments have to be completed with zero downtime.
  • LPARs, CICS regions, etc. could actually be considered a type of virtualization.  Is there a way we could make them behave more like VMs?
  • Could mainframe developers take some of the best practices from .Net and Java?
  • A more open environment, such as a university, might be more willing to experiment with DevOps first.
Universal Principles
These principles from DevOps apply just as well to legacy servers and mainframes:
  • Source Control Everything (including infrastructure code)
  • Version Control Everything (including infrastructure code)
  • Automate Everything (including infrastructure code)
  • Test Driven Development: Test First, Test Everything
  • Test for Operational Quality (performance, transaction load, security, etc.)
  • Agility
  • Focus on the Business Outcome, not the features or requirements
  • Improve teamwork between Dev and Ops
  • Collect metrics so you can find problems earlier
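The "Test for Operational Quality" principle above means treating performance, load, and security budgets as automated tests that can fail the build, just like functional assertions. A minimal sketch of the idea, where `handle_request` is a hypothetical stand-in for the application code under test:

```python
# Minimal sketch of testing for operational quality: a latency budget
# expressed as an automated assertion. `handle_request` is a placeholder
# for real application code, not any particular framework's API.
import time

def handle_request(payload):
    # Stand-in for the code path being measured.
    return {"status": "ok", "echo": payload}

def test_latency_budget(budget_seconds=0.5):
    """Fail if a representative request exceeds its performance budget."""
    start = time.perf_counter()
    response = handle_request({"id": 42})
    elapsed = time.perf_counter() - start
    assert response["status"] == "ok"
    assert elapsed < budget_seconds, f"too slow: {elapsed:.3f}s"
    return elapsed

if __name__ == "__main__":
    test_latency_budget()
```

Because the budget is just an assertion, it runs in the same pipeline as the functional tests, which is what lets you collect metrics and find problems earlier rather than in production.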