
Cloud Flexibility encounters IT Procurement Inflexibility


The Cloud.  Whether it’s a mega-cloud provider in the public clouds, a private or managed cloud, on premises or off prem, cloud is all about flexibility.  Add an instance, add a service, it’s just a click.

(Note the dynamic cloud, the ability of an app to dynamically expand its resources as load increases, remains mostly hype.  While this was one of the first great promises of the cloud-o-sphere, it has not translated into reality.  Further, just shutting down non-production environments during the night or on a schedule can be a significant cost saver – but it is not offered by the mega-providers.  At least in this area 3rd-party cloud support vendors have stepped in – but many teams are not aware of this and end up with lots of idle time on compute nodes.)
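As a rough illustration of what those 3rd-party schedulers do (and what a small in-house script could do), here is a minimal sketch in Python with boto3 that stops any running EC2 instance carrying a non-production tag.  The “Environment” tag name, its values, and the region are assumptions for the example, not anyone’s standard convention.

```python
# Minimal sketch: nightly shutdown of non-production instances.
# Assumes instances are tagged Environment=dev/test/qa -- adjust to
# your own tagging scheme.  Run it from cron, a scheduler, or a Lambda.
import boto3

NON_PROD_VALUES = ["dev", "test", "qa"]  # assumed tag values

def stop_non_prod_instances(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": NON_PROD_VALUES},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    print("Stopped:", stop_non_prod_instances())
```

A matching start script in the morning completes the schedule; the saving is simply the compute hours those instances no longer burn overnight.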

While the cloud providers offer that wonderful, relatively instant service with a click, each one of those clicks carries a cost.  And cost means…IT procurement, the department focused on getting the best deal for the IT dollar and making the long-term contracts that keep us operational.  And they have a process, a long process, for each item to be purchased.

Cloud flexibility means I can just add a node or VM, and add a backup or DB or firewall.  IT procurement means forms and weeks and reviews.

When we purchased compute capacity for a project for the year, which consisted of a series of servers and expensive software licenses, this process made sense.  My purchase had significant cost and long-term implications.

With Cloud accounts, my “purchase” can be “unpurchased” at any time (at least in the public clouds – private clouds often require some time commitment), and it can start small and grow as the capacity need grows.  (In traditional IT, “new projects” often purchased their first 2 years of servers in the initial purchase – that’s how it was done.  Nobody wanted their project to be in trouble due to insufficient resources, which often meant over-purchasing; and until the project was in production under actual production load, the team rarely knew the server capacity actually needed beyond a high-level guesstimate…one that could easily be 50% off.)  With Cloud, we can start small and increase capacity with ease as the actual usage grows.
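To make “start small and grow” concrete, here is a minimal sketch (boto3 again, with a placeholder instance ID and target type) of growing capacity by resizing an existing instance: a stop, an attribute change, and a start, rather than a new hardware purchase.

```python
# Minimal sketch: grow capacity by resizing an instance instead of
# over-buying two years of servers up front.  The instance ID and
# target type below are placeholders for the example.
import boto3

def resize_instance(instance_id, new_type, region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)

    # The instance must be stopped before its type can be changed.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": new_type},
    )

    ec2.start_instances(InstanceIds=[instance_id])

# Example: step up from a small node as real production load appears.
# resize_instance("i-0123456789abcdef0", "m5.xlarge")
```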

It is obviously a significant positive NOT to have to buy capacity we’re not using today and may not use for 18 months.  AND we can increase to the ACTUAL need rather than a need predicted 2 years ago.  BUT…if we have to go through a full IT procurement process for each of those Cloud changes, we’re hobbled and unable to gain that value.

This is not theoretical.  Try having a cloud vendor process a Purchase Order and Statement of Work for “Add Managed Backup to Node, $129.95 per month” and “Storage Encryption Service, $39.95 per month” – the experience isn’t pleasant for anyone and makes Cloud use somewhat impractical UNLESS we return to the old approach of over-estimating everything we need and buying (allocating all those high-volume nodes and services) up front.

It’s easy to overuse, over-allocate, and fail to manage cloud resources (not releasing resources and services no longer in active use), and IT procurement can provide necessary oversight and management of those resources.  But they have to come up with new, cloud-flexible procedures to do so.
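One concrete form that cloud-aware oversight could take is an automated report of resources nobody is using any more.  The sketch below (boto3, with an assumed region) flags EBS volumes that are no longer attached to any instance but are still accruing charges.

```python
# Minimal sketch: report unattached EBS volumes that are still billing.
import boto3

def unattached_volumes(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    vols = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )["Volumes"]
    return [(v["VolumeId"], v["Size"]) for v in vols]  # Size is in GiB

if __name__ == "__main__":
    for vol_id, size_gib in unattached_volumes():
        print(f"{vol_id}: {size_gib} GiB unattached")
```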

Otherwise, the value of cloud services is lost.
