Why a Dichotomy? In its latest IaaS Magic Quadrant report, Gartner now defines two distinct cloud implementation types, Mode 1 and Mode 2. Mode 1 refers to what is often called a “Lift and Shift” of a workload, while Mode 2 refers to workloads designed around a “true” cloud paradigm. I thought it worthwhile to discuss what the difference is and why it exists. This blog may run a little long, but I find the subject worthy of some discussion.

The promise of the cloud was initially all about scale-out: the ability to elasticize workloads. Instead of compute being data bound, it could be data driven. Data could be collected from many places, and lower-cost storage allowed those vast collections to be amalgamated into Data Lakes. Because this data was unstructured, new paradigms were needed to work with it. NoSQL databases that did not need a predefined structure or schema made it possible to compare disparate data sources, while paradigms like MapReduce got that data into a somewhat organized format where it could be analyzed, machine learning algorithms could be applied, and predictive analytics let Data Scientists discover all sorts of new correlations. All of this was achieved with a shared-nothing, scale-out architecture of redundant commodity compute instances on low-cost hardware in a pay-per-use model: need more compute, add instances; need less, remove them. Along the way the technology rapidly progressed into in-memory computing, data grids, and other paradigms, and a number of Open Source projects took computing to a whole new level.
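To make the MapReduce idea concrete, here is a toy sketch in plain Python: a word count over "unstructured" documents, with the map, shuffle, and reduce phases spelled out. This is only an illustration of the pattern; a real framework would run each phase in parallel across the shared-nothing instances described above.

```python
from functools import reduce
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values (here, summing the counts)
    return {word: reduce(lambda a, b: a + b, counts)
            for word, counts in groups.items()}

docs = ["the cloud scales out", "the cloud is elastic"]
counts = reduce_phase(shuffle(map_phase(docs)))
```

Because the map and reduce functions touch only their own slice of the data, adding more instances just means splitting the document list further, which is exactly the elasticity the paragraph above describes.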

The platform and ideas surrounding this then started to spawn new ideas. An organization could take an application that was designed to scale up, re-engineer or rebuild it to a scale-out, multi-tenant SaaS model, and then sell the application on a pay-per-use basis. Applications could also be designed to be elastic: as more compute, memory, or storage was needed, it could be added (or subtracted). But this still required some pre-allocation of compute, and even with the promise of virtualization, the model was heavy. Enter containers: a low-overhead model where everything required for a function (operating system, code, etc.) could be compressed into a single unit and spun up on demand to complete its function and die, the concept behind Microservices. The model enables rapid deployment of changes and fixes, as only that single unit of “code” needs to change, and once it is pushed, the new code is in place the next time the container is used.
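The "spin up, do one job, die" lifecycle can be sketched with nothing but the Python standard library. This is a hypothetical single-purpose health-check service, not any particular product's code; in practice such a service would be packaged with its runtime into a container image and started on demand.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def health_payload():
    # The single small job this service does; fixing it means
    # redeploying only this one unit of code, not a whole monolith
    return {"status": "ok", "service": "health-check"}

class HealthHandler(BaseHTTPRequestHandler):
    """One endpoint, one responsibility."""

    def do_GET(self):
        body = json.dumps(health_payload()).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the toy example quiet

def serve_one_request():
    # Spin up, serve a single request, then exit: the container
    # lifecycle (start on demand, do the work, die) in miniature.
    server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0: OS picks a free port
    try:
        server.handle_request()
    finally:
        server.server_close()
```

Because the service owns nothing but its one function, a fix is a rebuild and redeploy of this unit alone, which is what makes the rapid-change model described above possible.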

However, building the cloud-scale applications described above is not an overnight process, so many organizations are not yet taking advantage of what the cloud has to offer.

Enter the hype: “we have to be in the (Public) cloud”, “it will save us money”, “it will make us more efficient”, “we can do business better”; and the subsequent push to get everything out of the data center and into the Cloud. What does that really mean, and how has it changed the face of the cloud? Now we have organizations moving their siloed scale-up applications from one data center to another. The major Public Cloud providers have suddenly all started offering bigger and bigger instances, because the idea of scale-out on low-cost hardware just doesn’t work for a “legacy” application and database built to scale up. But since the market is dictating that everyone move to the cloud, lift and shift is becoming a reality: I can buy a BIG instance to put my application on, but I also need gobs of bandwidth and a new security model so my users can run it. Great, I got rid of my in-house data center, but I’m now paying as much as (if not more than) before. My head count isn’t changing, because all this scale-up compute still needs to be managed, and I need new skill sets to manage it. Unless I took the steps to ensure proper governance (including security) and orchestration, my move to the cloud is not the success I expected.

The next point I’d like to touch on is that many large vendors are busy Cloudwashing their offerings (see my earlier blog); in my opinion, Oracle may be one of the biggest offenders here. They take their scale-up legacy applications, put some sort of multi-tenant face on them, host them in a data center, and they have instant Cloud versions. Of course these are not really any different, but from a customer’s point of view, they get out of having to host, maintain, and upgrade the application and pay up-front licenses, and can move to an OpEx rather than CapEx expenditure model. Will it be less expensive? According to the sales people, “Yes”; but according to the execs reporting to shareholders, “No” (“We will make more profit on Cloud Business”).

We also need to look at what is happening with the NoSQL and Big Data models: everyone is running to SQL over xxxxx. SQL has been around way too long for people to dismiss it; there are too many existing BI tools out there that depend on SQL, people know how to write SQL, and even with the advent of R to provide some amazing insights into Big Data, people are comfortable with SQL. Apache Drill (especially the release from MapR) is a great example of this, and for day-to-day analysis it’s a great tool. But as people limit themselves to SQL instead of newer analysis tools like R, the promise of the data-driven cloud paradigm fades, and we become schema driven and data bound again.
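The pull of SQL is easy to demonstrate. Below is a minimal sketch using Python's built-in sqlite3 module as a stand-in (an assumption for illustration; tools like Apache Drill run similar declarative queries directly over files in a data lake rather than over a local database). The toy event data and column names are hypothetical.

```python
import sqlite3

# Toy event records that might otherwise live as JSON in a data lake
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, action TEXT, ms INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("alice", "login", 120), ("bob", "login", 95),
     ("alice", "query", 430), ("alice", "query", 210)],
)

# A familiar declarative query: no new paradigm for the analyst to learn,
# and any SQL-speaking BI tool could issue the same statement
rows = conn.execute(
    "SELECT user, COUNT(*) AS n, AVG(ms) AS avg_ms "
    "FROM events GROUP BY user ORDER BY n DESC"
).fetchall()
```

The convenience is real, but it is also the point made above: a GROUP BY over known columns quietly assumes a schema, which is exactly the schema-driven habit the data-driven paradigm was supposed to move past.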