
The Cloud Computing Newswire


Cloud Computing versus Cloud Data Centers

Isolation of resources in “the cloud” is moving providers toward hosted data centers

Isolation of resources in “the cloud” is moving providers toward hosted data centers and away from shared resource computing. Do we need to go back to the future and re-examine mainframe computing as a better model for isolated applications capable of sharing resources?

James Urquhart in “Enterprise cloud computing coming of age” gives a nice summary of several “private” cloud offerings; that is, isolated and dedicated resources contracted out to enterprises for a fee. James ends his somewhat prosaic discussion of these offerings with a note that this “evolution” is just the beginning of a long process.

But is it really? Is it really an evolution when you appear to be moving back toward what we had before? The only technological difference between isolated, dedicated resources in the cloud and an "outsourced data center" appears to be the way in which the resources are provisioned. In the former they're mostly virtualized and provisioned on-demand; in the latter they're provisioned manually. But the resources and the isolation are the same.
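That distinction can be made concrete with a small sketch. This is purely illustrative (the class and function names are hypothetical, not any provider's API): both paths yield the same dedicated, single-tenant resources, and only the provisioning mechanism differs.

```python
# Illustrative sketch only: a "private cloud" instance and an
# outsourced data-center server end up as the same dedicated,
# single-tenant resource; only how it was provisioned differs.
from dataclasses import dataclass

@dataclass
class Server:
    cpu_cores: int
    ram_gb: int
    tenant: str          # dedicated to a single tenant either way
    provisioned_by: str  # "api" vs. "manual" is the real difference

def provision_on_demand(tenant: str, cpu_cores: int, ram_gb: int) -> Server:
    """Cloud-style: a virtualized server materializes via an API call."""
    return Server(cpu_cores, ram_gb, tenant, provisioned_by="api")

def provision_manually(tenant: str, cpu_cores: int, ram_gb: int) -> Server:
    """Data-center style: same dedicated resources, human in the loop."""
    # ticket filed, hardware racked, cables run... some weeks later:
    return Server(cpu_cores, ram_gb, tenant, provisioned_by="manual")

a = provision_on_demand("acme", 8, 32)
b = provision_manually("acme", 8, 32)
# Identical isolation and capacity; only provisioned_by differs.
assert (a.cpu_cores, a.ram_gb, a.tenant) == (b.cpu_cores, b.ram_gb, b.tenant)
```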

At some point we've moved from a definition of cloud computing that requires shared compute resources to one that requires on-demand provisioning and shared floor space, but not much more. We're creating cloud data centers, not cloud computing, and ironically it may be internal, on-premise enterprise-class "clouds" that end up the only place where cloud computing is actually being put to use.


I know, it sounds completely unreasonable to take that stance, doesn't it? But I'm not kidding. If the majority of public cloud deployments are "private", i.e. isolated networks and resources, then it really isn't sharing compute resources, is it? But if the majority of private cloud deployments are "public", i.e. sharing network and compute resources across lines of business, departments, projects, and applications, then it is cloud computing as originally defined (though I'm completely ignoring the requirement of 'access via the Internet' because I don't agree with that requirement).

What we're seeing evolve is cloud data centers, not cloud computing. Organizations want to reduce expenses, and they're seeing that by deploying applications in the cloud they can eliminate capital expenditures (they don't pay for the hardware up front) and operating expenses (maintenance and management are included in the cost). But they're still leery of sharing resources (and risk) with other organizations, and concerned with the security of their data and applications in "the cloud". So they're looking to private cloud deployments as a way to address those concerns but still realize the benefits of deploying in an external data center.

Internally, however, organizations can force the sharing of resources between projects, applications, and departments. They can leverage idle resources on existing hardware and thus they can build a cloud computing environment and not just a cloud data center environment.
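The economics of that internal sharing can be sketched numerically. This is a toy model (the capacities and workload numbers are made up, and the first-fit packing is a stand-in for a real scheduler), but it shows why pooling idle capacity across departments needs fewer hosts than dedicating hardware to each one:

```python
# Toy model: sharing idle capacity across departments vs. giving
# each department its own dedicated hosts. All numbers are assumed.
HOST_CAPACITY = 16  # CPU cores per host (assumption)

workloads = {  # department -> core demand of each workload
    "finance":   [4, 3],
    "marketing": [5, 2],
    "dev":       [6, 4],
}

def dedicated_hosts() -> int:
    """One pool per department: each department's idle cores are wasted."""
    total = 0
    for demands in workloads.values():
        used, hosts = 0, 1
        for d in demands:
            if used + d > HOST_CAPACITY:
                hosts += 1
                used = 0
            used += d
        total += hosts
    return total

def shared_hosts() -> int:
    """One shared pool: first-fit-decreasing packing across departments."""
    free = []  # remaining cores on each shared host
    for d in sorted((d for w in workloads.values() for d in w), reverse=True):
        for i, f in enumerate(free):
            if f >= d:
                free[i] -= d
                break
        else:
            free.append(HOST_CAPACITY - d)
    return len(free)

print(dedicated_hosts(), shared_hosts())  # 3 2
```

With these (invented) numbers, the shared pool covers the same 24 cores of demand with two hosts instead of three, which is the idle-capacity argument in miniature.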


You’re probably shaking your head and muttering, “That’s a pretty fine fiber you’re splitting there. Does it really matter?”

The answer to that is, of course, no. It doesn't really matter whether you're leveraging cloud computing or cloud data centers. It doesn't matter what you call it as long as, in the end, it achieves the result you're looking for: the secure delivery of fast applications in the most cost- and resource-efficient manner possible.

And maybe James (and others like Hoff) who've patiently explained that we're on a slow evolutionary train toward cloud computing are right. The deployment models – and mindset of IT – today just aren't compatible with a pure cloud computing model. We have to work slowly toward that goal, taking smaller steps in between. Virtualization and private cloud data centers are just a waypoint along the way to a much more granular method of computing that may very well require changes in the very fabric of that which underlies all applications: the operating system.

The operating systems we run today were not designed with the thought that CPU resources might be physically located elsewhere. So perhaps what's needed for pure cloud computing – as it was originally thought of – is a major change in the way operating systems leverage bare-metal resources.

Would that make a difference? One of the biggest factors driving organizations toward private cloud is security concerns regarding public cloud. If the core application is isolated but individual pieces of application logic, i.e. blocks of machine-level code, are executed using shared banks of CPUs and memory, would that be closer to cloud computing, and would it be more secure? Is it possible that through a change in the way operating systems leverage compute resources (including those available on other hardware in some kind of collaborative, distributed operating system grid) we can have our cake and eat it too?
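As a thought experiment only (no such operating system exists; this just mimics the shape of the idea), the grid model looks something like this: self-contained units of logic are dispatched to whatever compute is free, while each tenant's data stays confined to its own units. A local thread pool stands in for a grid of remote CPUs.

```python
# Thought-experiment sketch: blocks of application logic dispatched
# to a pool of compute, with each tenant seeing only its own data.
# A thread pool stands in here for remote machines in a grid.
from concurrent.futures import ThreadPoolExecutor

def logic_block(tenant_data):
    # A self-contained unit of work: it receives only its own inputs
    # and never touches another tenant's memory.
    return sum(x * x for x in tenant_data)

tenants = {"a": [1, 2, 3], "b": [4, 5]}
with ThreadPoolExecutor(max_workers=4) as grid:
    futures = {name: grid.submit(logic_block, data)
               for name, data in tenants.items()}
    results = {name: f.result() for name, f in futures.items()}

print(results)  # {'a': 14, 'b': 41}
```

The open question the paragraph raises is whether that per-block isolation, enforced by the OS rather than by network segmentation, would satisfy the security concerns that drive organizations to private deployments today.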

And when we look at that model some of us surely shake our heads, remembering mainframes, and think “We’re going back to the future.”

Maybe that whole mainframe model wasn’t such a bad idea after all.



More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.