A Method for Service Oriented Architecture (SOA)

When you adopt service oriented architecture (SOA), the most important step in architecture and high-level design when building a new system is obviously decomposing the system into the right services. A prudent way to decompose a system into services is to first identify which parts of the system are likely to change most frequently. You then decompose by volatility and set up dependencies such that more volatile services always call less volatile services. Within the same level of volatility, of course, you would further decompose services by function if needed.

This, it is said, makes for a much more maintainable system that is far more adaptable to changing business requirements – which are, of course, the norm in our industry. There is a school of thought, however, that goes one level beyond this. It classifies a service as one of four types (in increasing order of volatility): utilities, accessors/persistors, computational services, and use case handlers.

A utility service deals solely with cross-cutting concerns – either throughout the system (such as security, logging, and configuration), or throughout a given domain (e.g. persistence of documents in a larger system of which documents are a part). Utilities are mostly purely technical and can be called by all services.

An accessor/persistor is the service that sits closest to the data at hand – be it some external resource or, as is more likely the case, the database. These services deal primarily with retrieval and storage, but also with logical grouping of the data in a manner that makes sense to the rest of the application (think aggregates in DDD terms), data-level validation, redaction for security, and so on. Since these services are closest to the metal (the metal being the database), and we don’t want volatility to sneak in there, they need to be the least volatile services apart from utilities. They are the only services that can talk to data sources, and they can be called by use case handlers and computational services. Utilities cannot call them; neither can other accessors/persistors.

A computational service will have hardcore computational logic that can change more often than, say, your database schema, but less often than specific use cases. They can call utilities and accessors/persistors; but usually it is advised that they be purely computational – in that they get the data they need to work with handed to them, they do their thing, and then they return their output. Only use case handlers can call them.

A use case handler is the most volatile and executes business use cases, mostly by orchestrating calls to less volatile services. Use case handlers can call utilities, computational services and accessors/persistors as needed. They also handle operation-level authentication and authorization if applicable. They are the external interface to the “service platform” in that all consumers – be it the UI or external systems – can only talk to use case handlers. Less volatile services are thus shielded. What if a use case handler needs to call another use case handler? Within a contiguous subsystem, that should not happen; if it does, that is a design smell. It could become necessary to facilitate communication between two subsystems – in which case it is a better idea to do it through queues, messages and subscriptions (think bounded contexts in DDD terms).
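
To make the taxonomy concrete, here is a minimal C# sketch of the four types and their allowed call directions, for a hypothetical ordering domain – all the names (OrderHandler, PriceCalculator, OrderRepository, LogUtility) are invented purely for illustration:

using System;
using System.Collections.Generic;
using System.Linq;

public class OrderLine
{
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }
}

public class Order
{
    public List<OrderLine> Lines { get; set; }
}

// Utility: cross-cutting and purely technical; callable by everyone.
public class LogUtility
{
    public void Info(string message)
    {
        Console.WriteLine("INFO: " + message);
    }
}

// Accessor/persistor: the only thing that talks to the data source.
public class OrderRepository
{
    public Order GetOrder(int orderId)
    {
        // Talk to the database here.
        return new Order { Lines = new List<OrderLine>() };
    }
}

// Computational service: pure logic; the data it needs is handed to it.
public class PriceCalculator
{
    public decimal CalculateTotal(Order order)
    {
        return order.Lines.Sum(line => line.UnitPrice * line.Quantity);
    }
}

// Use case handler: the most volatile layer; orchestrates the rest, and is
// the only thing consumers (UI, external systems) are allowed to call.
public class OrderHandler
{
    private readonly OrderRepository repository = new OrderRepository();
    private readonly PriceCalculator calculator = new PriceCalculator();
    private readonly LogUtility log = new LogUtility();

    public decimal PriceOrder(int orderId)
    {
        var order = repository.GetOrder(orderId);     // handler -> accessor/persistor
        var total = calculator.CalculateTotal(order); // handler -> computational
        log.Info("Priced order " + orderId);          // handler -> utility
        return total;
    }
}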

There are a few more guidelines to top these off:

  • Name your services so their role in the above taxonomy becomes clear. Instead of just “FooService”, think “FooUtility”, or “FooRepository” for accessors/persistors, “FooCalculator” or “FooEngine” for computational services, and “FooHandler” or “FooManager” for use case handlers.
  • As you are performing detailed design, keep an eye on the number of operations in a service. If it is starting to feel like a lot, perhaps it is time to decompose a bit further.
  • Treat each service as “the” unit of development for your system. This means – each service is a self-contained unit complete with tests and such. This also means that while you want to stay DRY within a service within reasonable limits, it may not be the best idea to strive for perfect DRYness across services.
  • Each operation should validate its requests and handle authentication and authorization if needed in a self-contained manner.

All of this also aligns pretty well with SOLID design principles. Now, all of this sounds great on paper; however, there are a few pain points to consider. If you strictly adhere to these restrictions, you will end up creating a lot more services than if you didn’t. For example, there are a lot of cases where the use case is simply to fetch the data or store the data. In such situations, you are forced to create a use case handler that is largely a pass-through to an accessor/persistor. You could relax these rules in such situations, but then you have a dependency from your consumer on your accessor/persistor. If, at some point, the use case evolves, you need to make sure a use case handler is inserted at that point rather than the so-called “non-volatile” accessor/persistor being modified.
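
A sketch of that pass-through shape, continuing with the invented ordering domain from above – the handler adds nothing today except a place for tomorrow’s logic to live:

// A use case handler that, for now, only delegates to the accessor/persistor.
// Consumers couple to this handler, not to OrderRepository, so when the use
// case grows real logic, it gets inserted here without touching the repository.
public class GetOrderHandler
{
    private readonly OrderRepository repository = new OrderRepository();

    public Order Handle(int orderId)
    {
        return repository.GetOrder(orderId);
    }
}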

The other obvious pain point with this approach is code bloat. More services means more code, and that means more code to maintain. I think when you get to a system of a certain size, that volume of code becomes justifiable. So, there is a system size below which a lot of this is too much overhead. It is wise to identify that point and react accordingly. Perhaps better tooling could help, too – something tailored for building systems this way.

One problem I have with this – and in fact with SOA in general – is that your system is made up of all these services that have logic distributed within them. If you decompose by volatility and then by function, then granted – your logic is distributed properly. But there still is no identifiable “core” of the system where the “meat” is, so to speak. That is something DDD addresses, in my opinion. Hence my increasing interest in meshing DDD and SOA. More on that later.

Getting on the Domain Driven Design Bandwagon

Domain driven design has been around for quite a while. The definitive book on it, by Eric Evans, first came out in 2003. For whatever reason, I had not been exposed to it in the places I worked. But I had been hearing about it for long enough, and from enough smart people, to give it a try. I researched it online a bit and went through quite a few articles. In particular, the set of articles on DDD by Jimmy Bogard (Los Techies) was quite helpful. Finally, I ended up buying Evans’ book and reading it cover to cover.

I liked what I saw. The whole idea of keeping your domain logic encapsulated within your domain objects appealed to me. There were questions, obviously, but I figured it was worth trying out. So that is what I am deep into currently. The idea of entities, value objects, aggregates and aggregate roots makes sense, but at the same time it also raises questions – especially with regard to database performance. I am hoping I will arrive at satisfactory answers.
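
Since I will likely be referring to these concepts again, here is a minimal sketch of how I currently understand the building blocks – the domain (a shopping cart) and all the names are invented for illustration. A value object is defined entirely by its values, an entity has an identity, and an aggregate root guards the invariants of everything inside its boundary:

using System;
using System.Collections.Generic;

// Value object: defined entirely by its values; no identity.
public class Money
{
    public decimal Amount { get; private set; }
    public string Currency { get; private set; }

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }
}

// Entity: has an identity that remains stable as its state changes.
public class CartItem
{
    public int Id { get; private set; }
    public string ProductCode { get; private set; }
    public int Quantity { get; private set; }

    public CartItem(int id, string productCode, int quantity)
    {
        Id = id;
        ProductCode = productCode;
        Quantity = quantity;
    }
}

// Aggregate root: the only entry point into the aggregate. Outside code
// never manipulates CartItem instances directly, so invariants (like the
// quantity rule below) are enforced in exactly one place.
public class ShoppingCart
{
    private readonly List<CartItem> items = new List<CartItem>();

    public int Id { get; private set; }
    public IEnumerable<CartItem> Items { get { return items; } }

    public ShoppingCart(int id)
    {
        Id = id;
    }

    public void AddItem(int itemId, string productCode, int quantity)
    {
        if (quantity <= 0) throw new ArgumentException("Quantity must be positive.");
        items.Add(new CartItem(itemId, productCode, quantity));
    }
}

It is exactly this boundary – loading the aggregate to enforce its invariants – that raises the database performance questions I mentioned.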

As things get more complex, other concepts such as bounded contexts and domain events enter the picture. While I get them in theory, my plan for now is to stay away from actually getting hands-on with those until I have a good handle on “vanilla” DDD. Another question I have is how this meshes with SOA – whether the two are complementary or mutually exclusive. I would hate to have to give up SOA to stick with DDD. In any case, it feels exciting – and I can’t believe it has been around for so many years and I never got into it.

For anyone getting into DDD, I strongly recommend reading Evans’ book. On a software timescale, it was written aeons ago (when Java was the state of the art, mind you). But all of it still applies, and if you’re working with something like C#, as I am, things become even easier, since you have so much more power with these modern languages.

So, for the moment, let’s say I am on the bandwagon. Hopefully I don’t have to get off.

An Easy Service Proxy Executor for WCF

If you have adopted service oriented architecture (SOA) and are using WCF as the hosting/communication mechanism for your internal services, chances are you are doing one of two things: you publish each service like any old WCF service, and the other services that consume it do so through its WSDL; or you create shared libraries containing the contract types that both the service and its consumers reference. Both are somewhat cumbersome but can be managed. If all your services are internal, though, going the WSDL route adds unnecessary overhead and is just a bit more unmanageable.

Now, if you decide to go the second route but still stick to the more obvious interface that WCF provides for instantiating and calling proxies (ClientBase and the like), that is a bit of a waste – since those classes were built with generated-code proxies in mind. In that case, the better option really is to have a mechanism to obtain a ServiceEndpoint and use it, along with the contract information, to create your own ChannelFactory – on which you can then call CreateChannel to get your proxy. A lot less code and a lot more manageable.
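
In its rawest form, that pattern looks something like the sketch below; IMyContract, MyOperation and the endpoint configuration name are stand-ins for your own shared contract and config entry:

using System.ServiceModel;

[ServiceContract]
public interface IMyContract
{
    [OperationContract]
    string MyOperation(string request);
}

public static class RawProxyExample
{
    public static string CallService(string request)
    {
        // The endpoint details come from a named entry in the config file
        // (the name here is an assumption for illustration).
        var factory = new ChannelFactory<IMyContract>("MyEndpointConfigName");
        var proxy = factory.CreateChannel();
        try
        {
            return proxy.MyOperation(request);
        }
        catch
        {
            factory.Abort(); // tear down on failure before rethrowing
            throw;
        }
        finally
        {
            factory.Close();
        }
    }
}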

To this end, for my own purposes, I built a bunch of classes that comprise my WCF service executor module. This is part of the Services namespace in the new Commons Library. Here is what a few of the key classes look like – you should be able to surmise how they can be used. The most common usage example would be:

var response = ServiceCallerFactory
   .Create<IMyContract>()
   .Call(x => x.MyOperation(request));

IServiceCaller

public interface IServiceCaller<out TChannel>
{
    void Call(Action<TChannel> action);
    TResult Call<TResult>(Func<TChannel, TResult> action);
}

ServiceCaller

public class ServiceCaller<TChannel> : IServiceCaller<TChannel>
{
    private readonly ServiceEndpoint endpoint;
    private readonly EndpointAddress endpointAddress;

    public ServiceCaller() {}

    public ServiceCaller(ServiceEndpoint endpoint)
    {
        this.endpoint = endpoint;
    }

    public ServiceCaller(EndpointAddress endpointAddress)
    {
        this.endpointAddress = endpointAddress;
    }

    public void Call(Action<TChannel> action)
    {
        var channelFactory = this.endpoint != null
            ? new ChannelFactory<TChannel>(this.endpoint)
            : new ChannelFactory<TChannel>();

        if (this.endpointAddress != null) channelFactory.Endpoint.Address = this.endpointAddress;

        var channel = channelFactory.CreateChannel();
        try
        {
            action(channel);
        }
        catch
        {
            channelFactory.Abort();
            throw;
        }
        finally
        {
            channelFactory.Close();
        }
    }

    public TResult Call<TResult>(Func<TChannel, TResult> action)
    {
        var channelFactory = this.endpoint != null
            ? new ChannelFactory<TChannel>(this.endpoint)
            : new ChannelFactory<TChannel>();

        // Apply the address override here as well, mirroring the void overload.
        if (this.endpointAddress != null) channelFactory.Endpoint.Address = this.endpointAddress;

        var channel = channelFactory.CreateChannel();
        try
        {
            return action(channel);
        }
        catch
        {
            channelFactory.Abort();
            throw;
        }
        finally
        {
            channelFactory.Close();
        }
    }
}

ServiceCallerFactory

public static class ServiceCallerFactory
{
    private static readonly object serviceCallerMapLock = new object();

    // Keyed by contract type; values are ServiceCaller<TChannel> instances,
    // stored as object since there is no non-generic ServiceCaller base.
    private static readonly IDictionary<Type, object> serviceCallerMap = new Dictionary<Type, object>();

    public static Func<Type, ServiceEndpoint> ServiceEndpointAccessor { get; set; }

    public static IServiceCaller<TChannel> Create<TChannel>(EndpointAddress endpointAddress = null)
    {
        object caller;
        if (serviceCallerMap.TryGetValue(typeof(TChannel), out caller))
            return (IServiceCaller<TChannel>) caller;

        lock (serviceCallerMapLock)
        {
            if (serviceCallerMap.TryGetValue(typeof(TChannel), out caller))
                return (IServiceCaller<TChannel>) caller;

            if (ServiceEndpointAccessor != null)
            {
                var serviceEndpoint = ServiceEndpointAccessor(typeof(TChannel));
                if (endpointAddress != null) serviceEndpoint.Address = endpointAddress;
                caller = new ServiceCaller<TChannel>(serviceEndpoint);
            }
            else
            {
                caller = endpointAddress == null
                    ? new ServiceCaller<TChannel>()
                    : new ServiceCaller<TChannel>(endpointAddress);
            }

            serviceCallerMap[typeof(TChannel)] = caller;
        }

        return (IServiceCaller<TChannel>) caller;
    }
}
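
One thing the snippets leave open is where the ServiceEndpoint comes from. Here is one plausible way to wire the accessor at application startup – the binding and address scheme are illustrative assumptions, not part of the library:

using System.ServiceModel;
using System.ServiceModel.Description;

public static class ServiceCallerBootstrap
{
    public static void Configure()
    {
        // Teach the factory to build an endpoint for any contract type.
        ServiceCallerFactory.ServiceEndpointAccessor = contractType =>
            new ServiceEndpoint(
                ContractDescription.GetContract(contractType),
                new NetTcpBinding(),
                new EndpointAddress("net.tcp://localhost:8523/" + contractType.Name));
    }
}

Once Configure() has run, the Create<IMyContract>().Call(...) example above works without any per-call endpoint plumbing.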

Bye, Bye, TypeScript, for now

As much as I raved about TypeScript in this post from some time ago, sadly the time has come for me to part with it – at least for now. It is a beautiful piece of work by a beyond-brilliant group of people. As I worked more and more with JavaScript the past year, though, I realized a few things.

The first – and this I already mentioned in my previous post – is that it is still maturing and not quite there yet. One of my pain points was the lack of object initializers, which, in my opinion, took away some of the expressiveness of JavaScript. As I look at it now, though, the real issue is the whole idea of trying to hide the fact that everything in JavaScript is a hash-map. Because of that fact, you can – and should be able to – create or assign an object on the fly using JSON notation. As soon as you introduce TypeScript annotations into the mix, this goes away. The best of both worlds would be if I could have a variable annotated and still be able to assign or initialize it using JSON (and have the JSON validated against the annotation).

The other side of that equation is the ability to pass tuples around like primitive values. That is what you get with JSON objects – and while it looks unstructured through the lens of a stricter language, it is in fact a feature by design. Similar reasoning applies to the whole idea of what functions are in JavaScript, how they can define scope, how they can nest, and so on. I am not sure how much the syntactic sugar of modules and classes helps in that regard.

Of course, TypeScript is a superset – so you can choose to use TypeScript where you wish and have vanilla JS in other places – but then you end up with this asymmetrical mess that still needs to go through the TypeScript compiler before it can work. I do not like asymmetry.

The second reason is something I find hard to articulate but have experienced nonetheless. TypeScript gels fine with Angular, but as soon as you start to use certain prominent frameworks like RequireJS or Jasmine, it starts to get in the way somewhat. Regardless, having to go looking for “d.ts” files every time you want to use a library is a pain.

The third reason is more of an invalidation of one of the merits I initially saw in TypeScript – tooling support. At first glance I was quite impressed by the IntelliSense and whatnot that TypeScript brought to my humble Visual Studio JavaScript editor. Since then, however, the tooling for vanilla JavaScript has gotten a lot better in the major IDEs – to the point that I feel it is not worth putting up with the overhead of having an add-on running.

Of course, even as I write this, my opinions are based on an early version that I have been using. I know TypeScript is progressing rapidly, so it may become a viable option at some point. Since ES6 is moving in a direction similar to TypeScript’s, however, I believe most libraries will adapt to the new specifications from ECMA – thus rendering TypeScript unnecessary. In any case, at least for now, I have decided to bid farewell to TypeScript.

Bootstrap Modal with AngularJS

We’ll look at a relatively low-hanging fruit, in case you’re working with vanilla AngularJS and Twitter Bootstrap and are not relying on other add-ons such as AngularUI’s Bootstrap extension. One common need I have is to show or hide Bootstrap modals based on a property on my view-model. Here’s a simplified view of the controller:

var app = angular.module('app', ...);
...

app.controller('ctrl', function ($scope, ...) {
    ...
    $scope.showModal = false;
    ...
});

And here is the HTML:

<a href="#myModal" data-toggle="modal">Show Modal</a>
...
...
<div id="myModal" data-backdrop="static">
    <div>
        Modal text goes here.
        <br/>
        <button data-dismiss="modal">Close</button>
    </div>
</div>

In order to maintain separation of concerns, I want to be able to show or hide the modal as the value of showModal changes. This is another good use for directives in AngularJS. As with the datepicker example, we need a directive that will add a watch on link and use the JavaScript methods available with Bootstrap to control the modal, rather than the data-toggle or data-dismiss attributes.

The directive would then look like:

app.directive('akModal', function() {
    return {
        restrict: 'A',
        link: function(scope, element, attrs) {
            scope.$watch(attrs.akModal, function(value) {
                if (value) element.modal('show');
                else element.modal('hide');
            });
        }
    };
});

Here, we are calling the Bootstrap method modal on the element to which the directive is applied, i.e. the div that is the modal container. The HTML, modified to work with this directive, then looks like:

<a href="#" ng-click="showModal = true">Show Modal</a>
...
...
<div ak-modal="showModal" data-backdrop="static">
    <div>
        Modal text goes here.
        <br/>
        <button ng-click="showModal = false">Close</button>
    </div>
</div>

The modal display is now bound to showModal. Note how we got rid of data-toggle (along with the id on the div) and data-dismiss. Now, if some property on the view-model needs to control whether the modal is displayed, then it would not make sense to have a link hardwired to trigger the specific modal anyway. The case for data-dismiss is different though.

Another thing to consider: if you have a lot of modals and a lot of different view-model properties controlling them, you are going to have a lot of watches, which you probably don’t want. If we assume that you will mostly have one modal visible at a time (unless you have multiple levels of modals going on – in which case, personally, I think you need to rethink the UX you are providing), you can build something more generic, such as a modalService that works with a single modal div and exposes a showModal operation that takes the content to display in the modal. There would need to be a corresponding hideModal operation as well, of course. I plan to explore this further.
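
As a rough sketch of that idea – untested, with invented names, and assuming jQuery is loaded (which Bootstrap’s modal plugin requires anyway, and which makes angular.element selector-capable) – the service might look like:

// Assumes a single shared modal div in the page, e.g.
// <div id="appModal" class="modal" data-backdrop="static">
//     <div class="modal-body"></div>
// </div>
app.factory('modalService', function () {
    var modalElement = angular.element('#appModal');

    return {
        showModal: function (content) {
            // Put the supplied content into the modal body, then show it.
            modalElement.find('.modal-body').html(content);
            modalElement.modal('show');
        },
        hideModal: function () {
            modalElement.modal('hide');
        }
    };
});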

Now, back to the data-dismiss thing. What we have at the moment is somewhat of a one-way binding. It would be ideal if this could be made two-way so that closing the modal using data-dismiss automatically set showModal to false. At the moment, I have not given this enough effort to be able to do it in an acceptably performant way. If someone has, I would love to hear about it.

Writing your own LINQ provider, part 4

This is the last in a short series of posts on writing your own LINQ provider. A quick outline of the series:

  1. A primer
  2. Provider basics
  3. A simple, pointless solution
  4. A tiny ORM of our own (this post)

A tiny ORM of our own

In the previous post, we took a look at a simple, albeit pointless, example of a LINQ provider. We wrap the series up this time by looking at something a little less pointless – a LINQ-based ORM, albeit a very rudimentary one. As with the previous one, it helps to take a look at the source code first.

This is a very simple example implementation and has its limitations. It only works with SQL Server. It only supports reads. It only supports these methods:

  • Any
  • Count
  • First
  • FirstOrDefault
  • Select
  • Single
  • SingleOrDefault
  • Where
  • OrderBy
  • OrderByDescending
  • ThenBy
  • ThenByDescending

There are a few more limitations, but again, the point is not to redo NHibernate or Entity Framework. There is also a simple fluent mapping interface that you can use like so:

Mapper.For<MyThing>("my_thing_tbl")
    .Member(x => x.Id, "id")
    .Member(x => x.Name, "thing_name")
    .Member(x => x.Date, "thing_date");

Once you’ve got your mappings in place, it is up to you to create the DB connection. Once you’ve done that, you can create an IQueryable<T> out of a SqlConnection instance and then do LINQ on top of it.

using (var conn = new SqlConnection("..."))
{
    conn.Profile(Console.WriteLine); // Write generated query to console.
    conn.Open();

    var query = conn.Query<MyThing>();

    var things = query
        .Where(x => x.Id < 1000)
        .OrderBy(x => x.Name)
        .Select(x => new {x.Id, x.Name, x.Date})
        .ToArray();
}

Or, using the “other” syntax:

var things = (from item in query
                where item.Id < 1000
                orderby item.Name
                select new { item.Id, item.Name, item.Date }).ToArray();

If you recall Step 2 from provider basics, there were two options. The previous solution used Option 1, i.e. one queryable that just builds up the expression, with the enumerator doing the parsing. For this one, we’re using Option 2, where we have a separate implementation of IQueryable<T> for each type of query operation to support.

When you first obtain a queryable, you get an instance of TableQueryable (which descends from SqlQueryable) corresponding to the table that the type is mapped to. Each call on top of it then wraps the queryable in another descendant of SqlQueryable (e.g. WhereQueryable for Where operations, and so on). This logic lives in SqlQueryProvider. Similarly, for executable methods, the appropriate type of ExecutableBase is created and called. Beyond this, the actual query creation logic is implemented in the individual queryables and executables defined within the Constructs namespace.

The queryables and executables work with classes within the QueryModel namespace that represent parts of a SQL query. Based on the operation they stand for, they convert an Expression into a Query, which can then be converted into a SQL string. Each queryable implements a Parse method that does this: it first parses the queryable it wraps, then works on top of the Query object returned by the inner Parse, and so on, until the top-most queryable yields the final query. The innermost, or leaf, queryable is always TableQueryable, which simply adds the name of the source table to the query model.
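
To illustrate the shape of that Parse chain, here is a paraphrased sketch – the class and method names echo the ones above, but the signatures and the Query stub are simplifications of mine, not the actual source:

using System.Linq.Expressions;

// Simplified stand-in for the QueryModel classes.
public class Query
{
    public string Table { get; set; }

    public void AddWhereClause(Expression predicate)
    {
        // Translate the expression into a WHERE clause on this query model.
    }
}

public abstract class SqlQueryable
{
    public abstract Query Parse();
}

// Leaf of the chain: starts the query model with the source table name.
public class TableQueryable : SqlQueryable
{
    private readonly string tableName;

    public TableQueryable(string tableName)
    {
        this.tableName = tableName;
    }

    public override Query Parse()
    {
        return new Query { Table = tableName };
    }
}

// Wrapper: parses the queryable it wraps, then adds its own contribution.
public class WhereQueryable : SqlQueryable
{
    private readonly SqlQueryable inner;
    private readonly Expression predicate;

    public WhereQueryable(SqlQueryable inner, Expression predicate)
    {
        this.inner = inner;
        this.predicate = predicate;
    }

    public override Query Parse()
    {
        var query = inner.Parse();
        query.AddWhereClause(predicate);
        return query;
    }
}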

LINQ is undoubtedly awesome, but knowing how it works gives you new appreciation for just how powerful it can be. Man, I love LINQ.

Writing your own LINQ provider, part 3

This is the third in a short series of posts on writing your own LINQ provider. A quick outline of the series:

  1. A primer
  2. Provider basics
  3. A simple, pointless solution (this post)
  4. A tiny ORM of our own

A simple, pointless solution

In the previous post, we took a look at what happens when you call LINQ methods on IQueryable<T>, and how you can use that to build your own provider. We take that a step further this time by building an actual provider – albeit a somewhat pointless one, in that it adds LINQ support to something that doesn’t really need it. The point, though, is to keep it simple and try to understand how the process works.

The best way to understand is to take a look at the source code first.

Now, a quick summary of what this is.

We have an interface, INextProvider<T>. It has one method, GetNext, which is supposed to get the next item in a sequence. An example implementation that uses a simple array as the underlying store is also included. Once you have an instance of INextProvider<T> – say, called nextProvider – you can then extract an IQueryable<T> out of it with this call:

var query = nextProvider.AsQueryable();

You can then use standard LINQ on top of it. Now, I know what you’re thinking: this INextProvider<T> seems uncomfortably similar to IEnumerator<T> – why would we need a query provider for it? We don’t – hence the “pointless” part – but again, the idea is to examine how building a provider works.
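
For reference, the shape of that interface would be something like the following – my paraphrase, since the exact signatures in the source may differ:

// Paraphrased: one method that yields the next item and reports whether
// the sequence has more - uncomfortably close to IEnumerator<T>, as noted.
public interface INextProvider<T>
{
    bool GetNext(out T next);
}

// An array-backed implementation along the lines of the one in the source.
public class ArrayNextProvider<T> : INextProvider<T>
{
    private readonly T[] items;
    private int index = -1;

    public ArrayNextProvider(T[] items)
    {
        this.items = items;
    }

    public bool GetNext(out T next)
    {
        index++;
        var hasNext = index < items.Length;
        next = hasNext ? items[index] : default(T);
        return hasNext;
    }
}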

The entry point is NextProviderQueryable, which implements IQueryable<T>, uses NextProviderQueryProvider as its Provider, and returns a NextProviderEnumerator from its GetEnumerator() call. This means that whenever one of the LINQ methods is called on an instance of NextProviderQueryable, one of the following happens:

  • If the method is something that creates another queryable out of the existing one (e.g. Where, Select, SelectMany, Cast, etc.), NextProviderQueryProvider.CreateQuery() is called. That, in turn, creates a new instance of NextProviderQueryable, but with the Expression set to what has been passed in. Thus, every call to CreateQuery ends up creating a new queryable with the Expression property representing the complete call.
  • If the method is something that enumerates a queryable (e.g. ToList, ToArray, etc., or a foreach loop), the GetEnumerator() method is called and enumeration starts. This means that NextProviderEnumerator takes over. This object is initialized with the value of Expression as of the time of enumeration, so it has complete information to parse it, figure out what needs to be done, and then do it using the INextProvider it is assigned. The ExpressionParser class is used to convert the expression into a series of “nodes” that act on each item in the underlying INextProvider and do the appropriate thing based on what each node is (e.g. a WhereNode has a predicate that it runs on each item).
  • If the method is something that returns a scalar (e.g. Any, All, First, etc.), NextProviderQueryProvider.Execute is called. In our case, we simply pass control to NextProviderEnumerator to enumerate as described in the previous point, and then perform the appropriate action. We do this by getting an IEnumerable<T> that uses NextProviderEnumerator as its enumerator (that is the NextProviderEnumerable class) and then calling the appropriate IEnumerable version of the IQueryable method that was called. All of this is handled by the ExpressionExecutor class. (A skeletal sketch of this provider shape follows the list.)
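
For orientation, here is that skeletal sketch. The four method signatures are the real System.Linq.IQueryProvider ones, but the bodies and the queryable stub are my simplified paraphrase of the flow described above, not the actual source:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class NextProviderQueryProvider : IQueryProvider
{
    // Called for operators that return another queryable (Where, Select, ...):
    // wrap the accumulated expression in a fresh queryable.
    public IQueryable<TElement> CreateQuery<TElement>(Expression expression)
    {
        return new NextProviderQueryable<TElement>(this, expression);
    }

    public IQueryable CreateQuery(Expression expression)
    {
        throw new NotImplementedException(); // non-generic path omitted in this sketch
    }

    // Called for operators that return a scalar (Any, All, First, ...):
    // parse the expression and execute it against the underlying provider.
    public TResult Execute<TResult>(Expression expression)
    {
        return (TResult) Execute(expression);
    }

    public object Execute(Expression expression)
    {
        throw new NotImplementedException(); // the real code hands off to ExpressionExecutor
    }
}

public class NextProviderQueryable<T> : IQueryable<T>
{
    public NextProviderQueryable(IQueryProvider provider, Expression expression)
    {
        Provider = provider;
        Expression = expression;
    }

    public IQueryProvider Provider { get; private set; }
    public Expression Expression { get; private set; }
    public Type ElementType { get { return typeof(T); } }

    public IEnumerator<T> GetEnumerator()
    {
        // The real code returns a NextProviderEnumerator initialized with
        // the current Expression; enumeration triggers the parsing.
        throw new NotImplementedException();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}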

As of now, only the following methods are supported: All, Any, Cast, Count, Distinct, ElementAt, ElementAtOrDefault, First, FirstOrDefault, Last, LastOrDefault, LongCount, OfType, Select, SelectMany, Single, SingleOrDefault, Skip, Take and Where. If you try to use any other method, you will get an exception. Even within these methods, if you try to use a variant that is not supported, you will get an exception.

Next time, we’ll try our hands at a more real world implementation, i.e. a tiny, tiny ORM.