Modern Web Development 101

Maybe you’ve heard the term “Modern Web” before. It embodies the idea of a constantly evolving, exciting platform that unique and powerful user experiences can be built upon. It is a platform that new capabilities are added to regularly, not waiting for long release cycles. And maybe you’ve been asked to build a “modern web…

Cmder: Making the Command Line Your Best Friend

Now that I’ve jumped fully on board the Git and GitHub bandwagon, I’m spending a lot more time at the command line.  In fact, I find myself working with both Visual Studio and the command prompt simultaneously, constantly switching back and forth between the two.  The main reason is that, while Visual Studio 2013 makes…

Why You Should Be Writing ECMA Script 6 Now

If you are reading this article then you have probably at least heard of ECMA Script 6, the next version of JavaScript, and are curious about what it means for you or your organization. You might be wondering how ES6 is different, if it will affect your existing applications, or if it will affect your…

When Should You Make the Move to Entity Framework 7?

The next version of Entity Framework will be called “Version 7” and will be released as part of the next version of ASP.NET, called “ASP.NET 5.” If you’re currently on EF6, you might jump to the conclusion that you should upgrade to EF7 as soon as it hits the streets. But the EF team has…

Less Defects with F# Units of Measure

One of the most powerful things that F# has is its ability to create custom units of measure. F# allows you to set an explicit type to a unit. That’s right…you will get compile time checking on any of the units that you use within your app...

Single Stepping a PowerShell Pipeline

As I was building up a moderately complicated pipeline in PowerShell, I was having some trouble and really wished there was a way to single step the pipeline so I could see the state of each item as it was processed. I wasn’t sure this was possible, but it is and I thought others might find it helpful. The magic is in the super powerful Set-PSBreakpoint cmdlet.

Because Set-PSBreakpoint can create breakpoints not only on lines and variable reads/writes, but also on commands, I thought I’d try setting a breakpoint in the PowerShell console on one of the commands in the pipeline, and it worked great! In the example below I set a breakpoint on the Get-ChildItem cmdlet and, when the breakpoint is hit, use the command-line debugger to step through parts of the pipeline (the ‘s’ command is step into).

    PS C:\Junk> Set-PSBreakpoint -Command Get-ChildItem

      ID Script   Line Command        Variable    Action
      -- ------   ---- -------        --------    ------
       0               Get-ChildItem


    PS C:\Junk> Get-ChildItem *.exe | ForEach-Object { $_.Name }
    Entering debug mode. Use h or ? for help.

    Hit Command breakpoint on 'Get-ChildItem'

    At line:1 char:1
    + Get-ChildItem *.exe | ForEach-Object { $_.Name }
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    [DBG]: PS C:\Junk>> s
    At line:1 char:38
    + Get-ChildItem *.exe | ForEach-Object { $_.Name }
    +                                      ~
    [DBG]: PS C:\Junk>> s
    At line:1 char:40
    + Get-ChildItem *.exe | ForEach-Object { $_.Name }
    +                                        ~~~~~~~
    [DBG]: PS C:\Junk>> s
    Foo.exe
    At line:1 char:48
    + Get-ChildItem *.exe | ForEach-Object { $_.Name }
    +                                                ~
    [DBG]: PS C:\Junk>>


I love how the tildes show you exactly where you are in the pipeline. That makes it super easy to debug the pipeline. A little hint when single stepping pipelines: if you set a command breakpoint on something as common as Get-ChildItem, you’re going to hit it all the time. What I found works best is writing the commands in the ISE and pasting them into the console window where you want to debug.

For those of you who use the ISE most of the time, there is one problem. Here are the same commands run in the ISE.

    PS C:\Junk> Set-PSBreakpoint -Command Get-ChildItem

      ID Script   Line Command        Variable    Action
      -- ------   ---- -------        --------    ------
       0               Get-ChildItem


    PS C:\junk> Get-ChildItem *.exe | ForEach-Object { $_.Name }
    Hit Command breakpoint on 'Get-ChildItem'
    Stopped at: Get-ChildItem *.exe | ForEach-Object { $_.Name }
    [DBG]: PS C:\junk>> s

    Stopped at: Get-ChildItem *.exe | ForEach-Object { $_.Name }
    [DBG]: PS C:\junk>> s

    Stopped at: Get-ChildItem *.exe | ForEach-Object { $_.Name }
    [DBG]: PS C:\junk>>


For whatever reason, the ISE does not show the tildes indicating where you are in the pipeline. Fortunately, it’s not hard to find out where you are, because the $PSDebugContext variable’s InvocationInfo field has all the information about everything going on at that point in the debugger.

    [DBG]: PS C:\junk>> $PSDebugContext.InvocationInfo


    MyCommand             :
    BoundParameters       : {}
    UnboundArguments      : {}
    ScriptLineNumber      : 1
    OffsetInLine          : 40
    HistoryId             : 3
    ScriptName            :
    Line                  : Get-ChildItem *.exe | ForEach-Object { $_.Name }
    PositionMessage       : At line:1 char:40
                            + Get-ChildItem *.exe | ForEach-Object { $_.Name }
                            +                                        ~~~~~~~
    PSScriptRoot          :
    PSCommandPath         :
    InvocationName        :
    PipelineLength        : 0
    PipelinePosition      : 0
    ExpectingInput        : False
    CommandOrigin         : Internal
    DisplayScriptPosition :


As you can see, the PositionMessage is the important data. In my ISE profile, I added the following function to make it easy to see where I am when single stepping a pipeline.

    function cpo
    {
        # Only does anything when stopped in the debugger, which is
        # when the $PSDebugContext variable exists.
        if (Test-Path variable:/PSDebugContext)
        {
            $PSDebugContext.InvocationInfo.PositionMessage
        }
    }


Single stepping the pipeline isn’t the super ninja magic some of those PowerShell people pull off, but it sure helped me figure out what was going on in my troublesome pipeline.

Seize the Opportunity of the Internet of Things

There are a lot of newcomers to the Internet of Things (IoT) and Machine to Machine (M2M) space lately. Many of them love to speak authoritatively and often use vending machines as their favorite example use case to illustrate the value of IoT. When you see me use vending machines in a similar fashion, it’s…

My Introduction To Data Science With F#

You may have seen a big boom lately with web technologies, especially with all of the JavaScript frameworks coming out at a pace of around one a week. But there is another boom you may have heard about…data science. Now I’ll be the first t...

Wintellect.Analyzers: Five New Roslyn Diagnostic Analyzers and Code Fixes

When I first saw the Roslyn compiler, I was thrilled! For once the compiler was not going to be a black hole where source code goes in one side and a binary comes out the other side of the wormhole. With open extensibility there are going to be some amazing tools developed that were impossible to build any other way. With this week’s release of the Visual Studio 2015 Preview, the Roslyn API is solid and ready for extension!

We have posted on Wintellect’s GitHub account the first five analyzers and code fixes we wrote to explore that part of the API. They should give you a good idea of how to get started writing your own rules. Here’s the initial set of rules:

AvoidCallingMethodsWithParamArgs
This informational-level analyzer gives you a hint that you are calling a method that takes a param array. Because calls to these methods cause memory allocations, you should know where they are happening.
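As a rough illustration (the Log method here is invented for the example, it is not part of the analyzer), the call below compiles to an implicit array allocation on every invocation, which is the kind of hidden cost this rule points out:

    using System;

    class Logger
    {
        // Invented example of a method that takes a param array.
        static void Log(string format, params object[] args)
        {
            Console.WriteLine(format, args);
        }

        static void Main()
        {
            // This call quietly allocates a new object[2] (and boxes the ints)
            // every time it runs.
            Log("{0} + {1}", 1, 2);
        }
    }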

AvoidPredefinedTypes
The predefined types, such as int, should not be used. When defining a type, the built-in types can be different sizes on different versions of .NET (e.g., desktop and Internet of Things), so you want to be as explicit about types as possible.
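As a rough before/after sketch (the class and field names are made up), the idea is to move from the C# keyword to the explicit framework type:

    class Totals
    {
        // Flagged: uses the predefined C# keyword.
        int count;

        // The explicit framework type name the rule prefers.
        System.Int32 total;
    }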

CallAssertMethodsWithMessageParameter
Calling the one-parameter overload of Debug.Assert is a bad idea because it will not show you the expression you are asserting on. This analyzer finds those calls, and the code fix takes the asserted expression and converts it into a string passed as the second parameter to the two-parameter overload of Debug.Assert.
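Here is a minimal before/after sketch (the method and its check are invented for illustration):

    using System.Diagnostics;

    class OrderProcessor
    {
        void Process(string[] items)
        {
            // Flagged: if this fires, the assert dialog tells you nothing useful.
            Debug.Assert(items != null);

            // What the code fix produces: the asserted expression repeated as the message.
            Debug.Assert(items != null, "items != null");
        }
    }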

IfAndElseMustHaveBraces
If and else statements without braces are reasons for being fired. This analyzer and code fix will help you keep your job. :) The idea for this analyzer was shown by Kevin Pilch-Bisson in his awesome TechEd talk on Roslyn. I just finished it off.
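To make the rule concrete, here is a small sketch of the before and after shapes (the method itself is invented):

    class BraceExample
    {
        int Clamp(int value)
        {
            // Flagged: a single-statement if without braces.
            if (value < 0)
                value = 0;

            // After the code fix: the branch gets braces.
            if (value > 100)
            {
                value = 100;
            }

            return value;
        }
    }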

ReturningTaskRequiresAsync
If you are returning a Task or Task<T> from a method, that method name must end in Async.
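For example (hypothetical class and method names), the analyzer would flag the first method below and be satisfied by the second:

    using System.Threading.Tasks;

    class Downloader
    {
        // Flagged: returns Task<string> but the name does not end in Async.
        Task<string> FetchPage()
        {
            return Task.FromResult("<html/>");
        }

        // Satisfies the rule.
        Task<string> FetchPageAsync()
        {
            return Task.FromResult("<html/>");
        }
    }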

There’s no real secret to writing rules. The Roslyn API is extremely well done, and once you start getting a feel for it you can quickly see how things fit together. The best thing you can do is spend time in the Roslyn Syntax Visualizer to see how syntax nodes and tokens fit together. I personally found that right-clicking on an expression and viewing the Directed Syntax Graph was super helpful.

Feel free to add your rules to our repository. I’ve got a couple more I’m working on, and so do other folks at Wintellect, so star the repo to get notifications of updates.

Using the F# Interactive

Another item on the F# Infographic was the use of the F# Interactive, also called the FSI. The interactive window is basically a REPL for F#. A few other languages have their own, including Python and Ruby. If you’ve used the Interm...

F# Community Projects

Many of you have certainly heard of F#, and as you may have seen from the previous F# Infographic, it definitely has one awesome community. I'd like to expand a bit on that community and highlight what they have accomplished. While the F# Soft...

Migrating Your Enterprise App (And Its Relational Model) to the Cloud – A Pragmatic Approach

In my last post I discussed the pervasive issue of relational modeling as a (poor) substitute for proper domain modeling, the reasons why (almost) everyone continues to build software that way, and the resulting problems that arise for those with ambitions to move their relationally-modeled software system to the cloud.

Given the prevalence of RDBMS-backed enterprise software and the accelerating pace of public cloud adoption, chances are pretty good you’re faced with this scenario today. Let’s discuss what you can do about it.


Scenario: We Built Our App Against A Relational Model… But Now We Want It In The Cloud!

My first piece of advice is to make sure you really need public cloud. The cloud has well-documented scalability, elasticity, and agility benefits, but most of those arise only for software designed intentionally to take advantage of them. They also tend to prove most cost-effective when amortized across a relatively long application life span, or across a relatively large (and, ideally, increasing) number of transaction requests. If yours is a legacy app with modest and relatively static resource needs, the cost justification for a public cloud deployment may not be so obvious.

It may, in the final analysis, not exist at all. Sure, you can leverage IaaS capabilities to spin up VMs and perhaps buy yourself a modest additional level of deployment flexibility. For some, that might be a reasonable and justifiable decision; perhaps capital expense constraints preclude reinvesting in updated hardware for the private data center. But let’s face it, a “modest improvement” in deployment flexibility is hardly the stuff of IT legend.

Instead of diving blindly into the cloud and/or immediately devoting resources toward a cloud-focused refactoring effort, first consider your reasons for wanting to adopt public cloud. If your application performs poorly on existing private data center hardware, the cloud isn’t pixie dust to wave over it and make it better. In fact, for some legacy codebases cloud hosting could degrade performance even further! You must understand what you’re dealing with; find a trusted software architect to assess existing issues with your application, determine root causes, and recommend mitigation strategies. Know where your big resource-hogging transactions are. Understand which tables in your current relational database are hotspots, and what you can do to fix that. Understand what happens front-to-back-to-front as users click through your web UI and wait several seconds or more for page navigation, data refresh, etc. In many cases, additional (if non-trivial) refactoring might be enough to squeeze better performance from your existing infrastructure. If yours is a multi-tenant architecture, consider sharding your database layer by customer to better accommodate heavy load. This may also put you in a better position to eventually migrate to the cloud at a more appropriate time.

Perhaps your legacy app runs well enough for current user demands, but you anticipate additional capacity needs in the near future. This isn’t a bad reason to consider public cloud; but are you sure your app will scale well horizontally? Recall that scale-out (horizontal scaling) is far and away the preferred scalability mechanism in the cloud; scale-up (vertical scaling) is also possible but tends to be very expensive in the cloud and is only a good solution in relatively narrow cases. If scale-up is your preferred (or only?) strategy, it’s entirely possible you can achieve your goals more cheaply by migrating to bigger iron in your private data center than by paying hourly costs for super-sized cloud VMs. Maybe… maybe not. Again, you may need to find a trusted resource to help you do TCO analysis of cloud and private data center options. Be informed. Don’t make knee-jerk decisions, in either direction.

So far it sounds as though I’m recommending against legacy app migration to the cloud. Far from it… done correctly and for the right reasons it can easily justify the needed time and resource commitments. If your due diligence indicates that a move to the cloud makes sense for your legacy application, your next step is to determine whether the IaaS or PaaS hosting model makes sense for you (I’ll skip over the choice of public cloud provider, for now anyway).

We can keep this short: you’re almost certainly going to want to start with IaaS virtual machines. PaaS hosting is a great model for applications intentionally built to leverage the cloud and comfortable with the constraints imposed by the PaaS provider of choice (usually things like choice of technology stack, deployment options, etc). PaaS can have a superior TCO story relative to the IaaS hosting model. But as you’re starting with a legacy application that wasn’t built for the cloud, PaaS may not be a deploy-and-forget scenario for you. To leverage PaaS you’ll need a horizontally scalable application that minimizes dependencies on custom OS configuration and file system access (custom Windows registry keys and reliance on specific or deep file system hierarchies are all red flags). You’ll need to understand the configuration, monitoring, and authentication/authorization subsystems of your PaaS provider and ensure your software plays nicely with those. You’ll also need to enable access from your PaaS-hosted app to your database; many PaaS solutions provide such behavior but only for a limited subset of technologies and configurations. Non-standard setups either require manual configuration or might simply be disallowed. In summary, a PaaS-hosted application is a cloud-ready application; getting your legacy app to that state is going to involve some effort. Thus IaaS will usually be a better starting point for existing apps.

Of course, VM hosting in the cloud can help you more easily scale up your existing relational database to the limits of your data size and query patterns (and budget :) ) and help you correspondingly scale up/out your application tier. To the extent this gives you more flexibility than you would have otherwise had in your private data center, this is a good thing. But if your resource needs continue to grow, you’ll eventually end up needing to refactor your database and your code to scale horizontally. Where to start?

Here’s where proper domain modeling is needed. There are a number of techniques for this; my personal favorite is Domain-Driven Design. Set aside your existing software for the moment and build a conceptual model of the business problem it’s solving... something a user of the system would understand. Know that this is no easy task; prepare to spend time and money to do it right. But the overarching goal is two-fold. First, develop this conceptual picture of your business problem independent of any storage-centric model used to represent data. This will give you technology-agnostic building blocks through which you can sort through challenging aspects of the system. Second, subdivide that large conceptual model into smaller, easier-to-digest sub-models that describe discrete processes within the larger problem domain. In DDD these sub-models are known as bounded contexts; they provide ambient, point-in-time meaning for actions and state changes within the system (you’re not just adding line items to a purchase order, you’re building the user’s shopping cart and preparing it for processing) and they’re bounded by a primary reason for existence (sort of a modeler’s take on the Single Responsibility Principle). A single conceptual model will consist of many such bounded contexts.

Bounded contexts become the unit of transactional consistency within a larger system; that is, state changes within a bounded context should be encapsulated within a single database transaction. Thus the use of bounded contexts tends to largely eliminate a big problem of monolithic relationally-modeled applications: the tendency for state changes to cascade across large swaths of the relational model, which results in very large (and slow!) transactions. Of course, the need will arise to propagate changes from one context to another; in DDD this often occurs as a formally modeled state change event (“item X added to shopping cart”) which can be handled by any other context which previously registered interest in such an occurrence. The handling of these events occurs beyond the scope of the initiating transaction, however. For more information on how this works, read up on eventual consistency (or drop me a line!).
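To make that concrete, here is a minimal C# sketch under names I’ve invented for illustration (it isn’t prescribed by DDD or tied to any particular messaging framework): the shopping-cart context records a domain event as part of its own small transaction, and a separate context handles that event afterward, which is where eventual consistency comes in.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // A domain event raised by the shopping-cart bounded context (names invented).
    class ItemAddedToCart
    {
        public Guid CartId { get; set; }
        public string Sku { get; set; }
        public int Quantity { get; set; }
    }

    class ShoppingCart
    {
        private readonly List<ItemAddedToCart> pendingEvents = new List<ItemAddedToCart>();

        public Guid Id { get; } = Guid.NewGuid();

        public void AddItem(string sku, int quantity)
        {
            // ...the cart's own state change is persisted in one small transaction...
            pendingEvents.Add(new ItemAddedToCart { CartId = Id, Sku = sku, Quantity = quantity });
        }

        // Events are handed to a dispatcher only after that transaction commits.
        public IReadOnlyList<ItemAddedToCart> DequeueEvents()
        {
            var events = pendingEvents.ToList();
            pendingEvents.Clear();
            return events;
        }
    }

    // Another bounded context (inventory, say) reacts outside the cart's transaction.
    class InventoryHandler
    {
        public void Handle(ItemAddedToCart e)
        {
            Console.WriteLine("Reserve {0} x {1} for cart {2}", e.Quantity, e.Sku, e.CartId);
        }
    }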

The use of bounded contexts has an interesting side effect for cloud-based systems. By subdividing your application model into smaller, self-consistent sub-models with small transactions, you make it possible to deploy storage independently for each context. In other words, you get horizontal scale of your database as a fundamental part of your design, which is the fundamental building block of a cloud-native application.

In practice, multiple bounded contexts will store their state on the same machine (even the same database instance); but there’s nothing stopping you from using multiple database machines or even multiple database products across multiple machines, if the need arises from a scalability perspective. In fact, that’s precisely how cloud-hosted data stores scale to handle request levels far beyond what was considered “big” even five years ago. In the cloud, you don’t need (or even want) One Big Database.

So back to your existing application. Once you’ve done some domain modeling and subdivided that into manageable sub-models, consider the work you did to identify performance bottlenecks and database hotspots, and map that list of issues to the sub-models you created. Granted the lines won’t always be neat and clean, but the goal is to prioritize sub-models that you can migrate from the existing relational model to their own self-encapsulated ones. Start with the highest priority items and work your way down the list. With time and patience you can convert that legacy, monolithic enterprise application into a horizontally-scalable, cloud-ready fire breather. Just realize that there is no silver bullet here; application migration to the cloud is a process that rewards steady persistence above all else.

A minor disclaimer: clearly I’m glossing over the details of this process, and focusing principally on the database layer. This is intentional, but it does omit other interesting and relevant topics like security, integration, deployment mechanics, legal and economic considerations of the cloud, etc. I haven’t even scratched the surface of all DDD has to offer. Those are fun discussions too. :) Ping me if you’d like to talk specifics of your situation.

Next time, I’ll talk about ways to avoid this whole mess if you’re writing a new cloud-targeted enterprise app… we have the technology!