
Here Come the .NET Containers

So I was watching Twitter this morning in anticipation of interesting news from TechEd 2014. TechEd isn't traditionally known as the place where Microsoft drops a lot of big, bold announcements, so I wasn't expecting too much. But then I saw Scott Hanselman post this little nugget:

[embedded tweet]

Scott’s certainly not prone to baseless hyperbole… so this was interesting. A bit later, he followed up with this:

[embedded tweet]

Okay, let’s go have a look. Hmm, yes… stuff from the last Build about native compilation, open language compilers, and better JIT… cloud-optimized CLR, that’s interesting… deploy my own CLR and .NET Framework with each app, okay that’s nice I guess WAIT WHAT.

Why in the world would I want to do that? (more on that in a moment)

<shakesHeadVigorously/>

Okay, moving on… VS.NET and IIS and self-host options, yes of course… what’s this? NuGet goo and .csproj are going away in favor of a project.json file?!?! Hellooooo, node and npm! It’s like someone took the blue pill and the red pill and smashed them together to create the purple drank insanity vortex pill.

<shakesHeadVigorously/>

<again/>

<feelingSlightlyWoozy/>
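Since that project.json business is what really made me blink, here's roughly what one of those early project.json files looked like. The package names, versions, and even some property names below are illustrative only… the schema was still shifting from one alpha drop to the next.

    {
        "dependencies": {
            "Microsoft.AspNet.Hosting": "0.1-alpha-*",
            "Microsoft.AspNet.Mvc": "0.1-alpha-*"
        },
        "configurations": {
            "net45": {},
            "k10": {}
        }
    }

The gist: your dependencies (eventually including the CLR and framework bits themselves) get declared in one JSON file and restored from NuGet feeds, and the .csproj/MSBuild ceremony drops out of the inner loop.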

Now it all just starts to run together… haymakers from left and right, I am numb to the madness… local package overrides, compile-as-you-go with Roslyn (in memory, no less) for that frictionless Node-like code-save-refresh dev loop, run anywhere (Mono etc.), dev anywhere, publish the whole schmear to NuGet, and on and on.

<passesOut/>
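Coming back to that compile-in-memory bit for a second, because it's worth seeing: here's a minimal sketch of what Roslyn's public Microsoft.CodeAnalysis.CSharp API makes possible. This is my own toy console app, not the actual vNext loader, and exact API names wobbled a bit between prereleases.

    using System;
    using System.IO;
    using System.Reflection;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;

    class InMemoryCompileDemo
    {
        static void Main()
        {
            // Parse some C# source text into a syntax tree.
            var tree = CSharpSyntaxTree.ParseText(
                "public static class Greeter { public static string Hello() { return \"Hi from memory\"; } }");

            // Reference the assembly that contains System.Object so the snippet can compile.
            var corlib = MetadataReference.CreateFromFile(typeof(object).Assembly.Location);

            // Build a class-library compilation -- no project file, no MSBuild.
            var compilation = CSharpCompilation.Create(
                "InMemoryAssembly",
                new[] { tree },
                new[] { corlib },
                new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

            using (var ms = new MemoryStream())
            {
                // Emit IL straight into a MemoryStream; nothing ever touches disk.
                var result = compilation.Emit(ms);
                if (!result.Success)
                {
                    Console.WriteLine("Compilation failed.");
                    return;
                }

                // Load the assembly from raw bytes and invoke the freshly compiled method.
                var asm = Assembly.Load(ms.ToArray());
                var greeting = asm.GetType("Greeter").GetMethod("Hello").Invoke(null, null);
                Console.WriteLine(greeting); // "Hi from memory"
            }
        }
    }

Wire that loop up to a file watcher and you can see where the Node-style code-save-refresh experience comes from.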

MS doing a ground-up rev of their flagship web application framework? Yeah, I’d say that’s pretty big.

But go back to the first part… “deploy your own CLR and .NET with each app”. Yeah, okay… but why? I sat there thinking about it for several minutes. Sure, minimize dependency issues, that’s nice enough… still seems like a lot of effort for that. In the real world, some folks have dependency problems, some don’t. That can’t be the real issue.

And then it hit me.

They’re building Docker for .NET.

Well, to be honest, I don’t know that for sure… I have no special insight or “insider info” or secret powers or anything like that. But it sure seems like that’s what they’re up to. It’s what I’d be building if I were them.

So what the heck is Docker, you ask? It's a virtualized hosting environment for software "containers". A container is an abstraction for configuring and deploying portable, self-sufficient applications that run virtually anywhere… on your laptop, on a VM in your server farm, on Amazon EC2 or Elastic Beanstalk, etc. Docker applications are OS- and technology-agnostic, and they're lighter-weight than VMs while still providing isolation and sandboxing features important to DevOps folks. Many industry watchers consider Docker to be a very big deal… the future of enterprise development, even.
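To make "container" a bit more concrete, here's about the smallest Dockerfile imaginable for a .NET console app running on Mono. The base image, paths, and app name are made-up placeholders for illustration, not a recommendation.

    # Start from a public base image that already carries the Mono runtime (illustrative).
    FROM mono:latest

    # Copy the compiled application into the image.
    COPY bin/Release/MyApp.exe /app/MyApp.exe

    # The container exists to run this one process, and nothing else.
    CMD ["mono", "/app/MyApp.exe"]

Build that once, push the resulting image to a registry, and the same artifact runs unchanged on your laptop, on a VM in your server farm, or up on EC2.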

Of course, Docker is implemented on top of the Linux kernel and isn’t necessarily the optimal host for traditional .NET enterprise apps… which doesn’t mean that MS isn’t targeting Docker specifically, but I expect if anything they’re more interested in the *concept* than the specific implementation.

My (entirely unsubstantiated) guess is that they’re building their own Docker-like infrastructure for hosting and executing .NET apps… on your laptop, on your server, on your local VM, on your Azure VM, and/or “bare metal” on to-be-seen Azure infrastructure. If you can package all your app dependencies like storage, queues, virtualized file systems and specific versions of the CLR and .NET Framework, then you can roll it all up into a “container”, deploy that container to a “host”, and press Go. Nice. Very nice, actually.

But wait, there's more. Here are a few other reasons to think .NET-based app virtualization is (or ought to be) in the works:

Virtualized application containers are the sweet spot between IaaS and PaaS.

Virtual machines are, let's face it, kind of a lousy deployment substrate for many cloud-based applications. They're big (usually GBs in size). They need patches and updates and antivirus software and all kinds of headache-inducing babysitting. In the hands of mere mortals they tend to expose far too much attack surface area (too many open ports, too many unused services left running, etc.). They tend to promote reduced application density for provisioned cloud hardware, which drives up costs.

And truthfully, how many enterprise web apps need OS-level customization and services? Some do, sure… especially legacy apps built without cloud deployment as a priority (or even a thought). These relative dinosaurs will be with us for some time… but increasingly, new apps won’t suffer from these issues. They’ll be cloud-native, built with cloud-first principles in mind. They will not require (or even prefer) explicit awareness of a “traditional” operating system.

VMs never have and never will represent the future and promise of the cloud. They are (mostly) a stopgap, bridge technology meant to help mainstream the adoption of cloud services. IaaS is a “thing” primarily because VMs predated cloud, and therefore represented a safe and known vehicle for developers and IT admins to begin working with cloud resources and deployments. “Oh, I see… I can take my existing VMs and put them up in the cloud, and run them there. I get that.” If memory serves, Azure’s lack of first-class VM support was a limiting factor for early adoption of the platform. Once Azure IaaS came into its own, it helped drive adoption. But such momentum has a shelf life. IaaS relevancy has nowhere to go but down.

Platform-as-a-Service is at the opposite end of the spectrum from IaaS. PaaS defines a "platform" substrate on top of the VM infrastructure; it is against this abstraction that applications are deployed and executed. This eliminates many disadvantages of VMs (fewer tangential headaches like admin, maintenance, infrastructure security, etc.) but at the relative cost of playing within someone else's sandbox… "you can run your app on our PaaS as long as it uses these versions of these technologies and doesn't try to do this or this or that", etc. Azure Web Sites and Worker/Web roles use this model, and it works quite well for the significant subset of apps that fit within it. Yet there are still occasional nagging issues with app compatibility or a general inability to customize the host environment sufficiently to run Your Very Important Application (recall the several weeks after .NET 4.5 shipped when it was still unavailable on Azure PaaS, etc.).

And that’s where application containers come in. A container is self-sufficient; it encapsulates all dependencies and configuration needed to execute an application. It is a mini-environment in which your application (and only your application) will run. It combines the best features of IaaS (self-containment, host agility, configurability) with the best of PaaS (lightweight, reduced admin burden, isolation). Containers also address specific shortcomings of IaaS (overly-broad canvas, legacy mental model of server resources and applications) and PaaS (lack of tangible application boundaries, relative inability to customize the host environment).

An app-focused container model is not anchored to legacy notions of server software. It is the future of cloud deployment and execution.

An app container becomes the de facto unit of deployment, isolation, scale, versioning, administration, security, and instrumentation.

All of these are important elements of any cloud-deployed application today; the trouble is that many aspects are fulfilled by a combination of your application and its host environment. This lack of uniformity has consequences in the form of code and configuration complexity and redundancy.

An app container environment provides an ideal abstraction on which to layer these infrastructure services, such that they’re “close enough” to your application code to be useful while remaining orthogonal to that same code. Reality tends to intrude on such best laid plans, but I have high hopes that we’ll see a clean separation of concerns here.

Higher app density in the cloud means better utilization of infrastructure.

As I've noted above, an app container infrastructure will tend to yield higher application density than an equivalent IaaS deployment over the same cloud resources.

The reason for this is simple enough. In an app container environment, the underlying infrastructure is responsible for mapping containers to available resources. In an IaaS environment this is left as an exercise for the IT admin. All other things being equal, over time an automated, rule-driven infrastructure will optimize resource utilization well in excess of the capabilities of your average (or even above-average) IT admin.

Too, the very nature of IaaS means that VMs will almost never be shared by multiple unaffiliated parties… even when it might be otherwise safe (and cost effective) for them to do so. The aggregate effect over time is to over-provision cloud resources relative to actual need.

On the other hand, the sandboxed isolation of app containers, and their relatively small resource footprint, makes it safe and desirable to pack them as densely as configuration parameters allow (minimum guaranteed CPU slices and RAM utilization will still factor in, etc.). This will tend to maximize resource usage over time, relative to the IaaS model.

App density affects you as a consumer of cloud resources. The higher your app density, the more cost effective you are… you’re getting more bang for your cloud dollar. Lower application density means you’re wasting money.

Dependency hell is finally eliminated… er, further tamed.

Let’s be honest… dependency hell will never be entirely eliminated so long as there is more than one programmer walking this Earth (and even then, probably not). But the self-sufficient nature of an app container deployment and execution model can go a long way to reducing the occurrences of such issues.

NuGet has become the de facto tool for managing dependencies in .NET; occasional versioning issues aside, it tends to work reasonably well. But it's the announced ASP.NET vNext "ship the CLR and .NET Framework as a dependency" feature which is likely to have the greatest impact here. This capability closes a major gap in the .NET dependency management story, and it's so compelling and significant that I can only imagine the end goal is to enable a container-like deployment and execution model.

A container-aware cloud infrastructure enables a million tiny PaaSes.

PaaS today is still immature, in that there are relatively few providers with relatively few offerings. That's not a criticism… it's a recent paradigm shift and one that is still trying to gain traction and answer the skeptics. The PaaS market is certainly growing, and should continue to do so… but whether that's a result of riding the larger cloud wave or due to the intrinsic (and, in my opinion, very real) value proposition of PaaS as a strategy unto itself isn't entirely clear. Probably some of both.

But if an app container model became commonplace, I predict a relative explosion in the number of PaaS providers and offerings… because suddenly it becomes easy to define a useful, configured application substrate ("a self-hosted HTTP endpoint environment running on .NET 4.5, pumping ingest messages onto a virtualized service bus") and expose that for the world to (the Docker-flavored analog of these steps is sketched just after the list):

1. pull off the shelf (a “registry” in Docker lingo)

2. create an instance of

3. install an application onto

4. deploy to a container host
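For a rough flavor of how that flow maps onto today's Docker tooling (the registry and image names here are invented for illustration):

    # 1. pull a pre-built substrate image off the shelf (from a registry)
    docker pull registry.example.com/dotnet-ingest-host

    # 2 & 3. create your own image on top of it, with your application layered in
    #        (assumes a Dockerfile that begins: FROM registry.example.com/dotnet-ingest-host)
    docker build -t my-ingest-app .

    # 4. deploy and run the finished container on a host
    docker run -d my-ingest-app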

Take Azure Web Sites as they exist today. Sure, you can deploy .NET, node, PHP, etc. to run there… but that's because the Azure team made explicit design and implementation choices to enable those things. That may cover a healthy chunk of potential workload, but it certainly doesn't exhaust the range of possibilities. How much more interesting would it be to allow virtually anyone to offer their own flavor of container-aware PaaS on top of Azure? What kind of market would evolve? What sort of possibilities and application types would emerge? Who knows… but it's fun to consider the potential!

Virtualized application containers, done correctly, enable Platform-as-a-Service…-as-a-service. Cue obligatory Inception reference. :)

Hybrid cloud scenarios actually become interesting.

Confession time. I hate the term “hybrid cloud”. This is unfortunate, because it comes up with all my clients interested in cloud. Every. Single. One.

Hybrid cloud is a term so overloaded as to be meaningless. Does it describe a single cloud-hosted application integrating data located both on-premises and in a public cloud environment? Sure. Does it mean a single application with processing that occurs both in the cloud and in a private data center? It can mean that, yes. Or is it a suite of applications, where the "hybrid" adjective describes the presence of some applications in a public cloud, some in a private cloud, and perhaps some located within an end-user's firewalled network topology… all tied together either conceptually or even technologically in some fashion? I've seen the term used for this, too.

It’s about at this point that my head starts to hurt, just a little.

But here again, an app container model can bring clarity and focus to otherwise muddied waters. The relative agility of a container that can plug-and-play as an autonomous unit, on many possible hosts, would serve as a terrific building block for a hybrid cloud strategy. In a world where Microsoft (or someone) enables app containers to run anywhere… on your laptop, on the server in your closet, in your private data center, in your Azure infrastructure, etc… suddenly you’ve got true choice and flexibility on where to deploy which pieces of your application (or which applications). The notion of a true hybrid cloud application (some parts in the cloud, some parts not) becomes not only feasible but relatively straightforward.

Again, that's not really the case today… VMs are (as mentioned above) clunky and baggage-laden. They're a poor choice (if currently one of the few) if you're looking to achieve host agility in the pursuit of hybrid cloud. PaaS is conceptually a better choice for a targeted, app-focused hybrid model, but suffers from the real issue that your PaaS layer has to run both in the cloud and on-premises… there are some notable players in this space like Red Hat's OpenShift and Cloud Foundry (both of which run on OpenStack) as well as the on-premises Azure Pack, but these strike me as imperfect solutions that suffer from the lack of a single, common abstraction on which PaaS can reside both locally and in the cloud. Virtualized application containers could very well be that necessary abstraction.

In the end Microsoft has a vested interest in greasing the skids and providing a true, frictionless “on-ramp to the cloud”. The ability to define and deploy a self-contained application container that does one thing well, and does it virtually anywhere, is a surefire means to drive public cloud adoption and bolster their new “cloud-first” strategy.

It's an exciting time to be in software. :)
