Two Types of Domain Events

You can find a good primer on domain events in this post by Udi Dahan. There are some issues with his approach, though, that Jimmy Bogard raises and addresses in his post. However, I was left with two questions:

  1. Shouldn’t the domain event be dispatched/handled only when the transaction or the unit-of-work commits? Whatever changes have been made to the state of the domain aren’t really permanent until that happens.
  2. There may be cases where domain events need to trigger changes to other domain objects in the same bounded context – and all of those changes need to be persisted transactionally. In that scenario, it makes sense for the event to be dispatched just before the transaction commits. However, whatever ends up handling that event also needs access to the current transaction or unit-of-work in play – so that all the changes make it to persistence in one fell swoop of a commit.

That leads me to conclude that there are really two types of domain events that need to be handled differently. The first type, as listed above, would either be infrastructure-y things like sending out e-mails and such, or messages sent to other bounded contexts or external systems. The second type would stay within the same bounded context, maintaining certain kinds of relations within the domain that could not be modeled within the same aggregate (simply put, they take the place of database triggers in the DDD metaphor).

At this point, I have no further design-level ideas on how they would be modeled. More on that later, hopefully. I do know that the thing raising the event should not need to know what kind of domain event it is going to raise. If I am a Foo and my Bar changes, all I care about is that I need to raise a BarChanged event; I do not need to know what kind of domain event BarChanged is. I will let this percolate a bit.
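
That said, here is a minimal sketch of the mechanics, if only to let the idea percolate in code. All the names (IDomainEvent, DomainEvents, IUnitOfWork, BarChanged) are hypothetical, and a real implementation would route events to handlers by event type rather than broadcasting everything to everyone:

// A minimal sketch, not a finished design. Requires System and
// System.Collections.Generic. Assumes the unit-of-work calls
// DispatchBeforeCommit just before it commits, and DispatchAfterCommit after.
public interface IDomainEvent { }

public interface IUnitOfWork { void Commit(); }

public class BarChanged : IDomainEvent { } // raised by Foo when its Bar changes

public static class DomainEvents
{
    // The second type: in-context handlers that must join the current transaction.
    private static readonly IList<Action<IDomainEvent, IUnitOfWork>> transactionalHandlers =
        new List<Action<IDomainEvent, IUnitOfWork>>();

    // The first type: infrastructure-y handlers (e-mails, messages to other contexts).
    private static readonly IList<Action<IDomainEvent>> postCommitHandlers =
        new List<Action<IDomainEvent>>();

    private static readonly IList<IDomainEvent> raised = new List<IDomainEvent>();

    public static void RegisterTransactional(Action<IDomainEvent, IUnitOfWork> handler)
    {
        transactionalHandlers.Add(handler);
    }

    public static void RegisterPostCommit(Action<IDomainEvent> handler)
    {
        postCommitHandlers.Add(handler);
    }

    // Foo only knows that it raises BarChanged - it neither knows nor cares
    // what kind of domain event BarChanged is or who handles it.
    public static void Raise(IDomainEvent domainEvent)
    {
        raised.Add(domainEvent);
    }

    // Called by the unit-of-work just before it commits, so handlers can make
    // further domain changes that are persisted in the same transaction.
    public static void DispatchBeforeCommit(IUnitOfWork unitOfWork)
    {
        foreach (var domainEvent in raised)
            foreach (var handle in transactionalHandlers)
                handle(domainEvent, unitOfWork);
    }

    // Called after a successful commit, once the domain changes are permanent.
    public static void DispatchAfterCommit()
    {
        foreach (var domainEvent in raised)
            foreach (var handle in postCommitHandlers)
                handle(domainEvent);
        raised.Clear();
    }
}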

DDD, meet SOA

There is a lot of discussion online around whether DDD and SOA can co-exist, and if so, what that looks like. I am of the opinion that they can, and I have arrived at a model that seems to work for me. Consider a complex DDD system with several bounded contexts and contrast it with an SOA system – and I am including the flavor of SOA that I describe in this post.

So, how do the two come together? As I have mentioned before, one problem I have with straight-up SOA is that there is no identifiable “core” of the system where the “meat” is. This “meat” is, in fact, the domain logic. Now, add to this the fact that the domain is the only thing that deals directly with the database (technically, it is the infrastructure layer that handles persistence – but you get the idea: it is the domain constructs that are persisted to and from the data store via repositories).

What if we modeled the entire domain as a service? It does not need to be hosted in the strict sense – think of it as a service that other services consume as a library. The repositories, combined with the domain objects themselves, can replace the accessor/persistor class of services. This makes sense in terms of volatility as well, since the domain really needs to be the least volatile part. You can then think of the application layer and the infrastructure layer as being made up of all the other services – and these would still follow the whole utility / computational service / use case handler hierarchy. You would not allow the domain layer to call utility services directly, though – that is something you would handle through domain events and the like. The presentation layer is still the presentation layer – it is where the UI lives, or the external hosting interface to your services.
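
As a rough sketch of what I mean – with Order, IOrderRepository and ApproveOrderHandler being hypothetical names – a use case handler in the application layer consumes the domain as a library, with the repository taking the place of an accessor/persistor service:

// Rough illustration only; all names are hypothetical.
public class Order // domain entity: the domain logic lives here
{
    public Guid Id { get; private set; }
    public bool IsApproved { get; private set; }

    public void Approve()
    {
        // Domain rules belong here, not in the handler.
        this.IsApproved = true;
    }
}

public interface IOrderRepository // replaces the accessor/persistor service
{
    Order Get(Guid id);
    void Save(Order order);
}

public class ApproveOrderHandler // use case handler in the application layer
{
    private readonly IOrderRepository orders;

    public ApproveOrderHandler(IOrderRepository orders)
    {
        this.orders = orders;
    }

    public void Handle(Guid orderId)
    {
        var order = this.orders.Get(orderId);
        order.Approve();
        this.orders.Save(order);
    }
}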

The mode of communication between bounded contexts does not change: you can now think of each bounded context as its own SOA subsystem, and communication between those subsystems would still happen through queues, messages and subscriptions. The application and infrastructure layers would also have “domain event handler” services that handle domain events – I guess you could argue that is just one type of use case handler.

A Method for Service Oriented Architecture (SOA)

When you adopt service oriented architecture (SOA), the most important architecture and high-level design step when building a new system is obviously the decomposition of the system into the right services. A prudent way to decompose a system into services is to first identify which parts of the system are likely to change most frequently. Thus, you decompose by volatility and set up dependencies such that more volatile services always call less volatile services. Within the same level of volatility, of course, you would further decompose services by function if needed.

This, it is said, makes for a much more maintainable system that is much more adaptable to changes in business requirements – which are obviously the norm in our industry. There is a school of thought, however, that goes one level beyond this. It classifies a service as one of the following four types (in increasing order of volatility): utilities, accessors/persistors, computational services and use case handlers.

A utility service deals solely with cross-cutting concerns – either throughout the system (such as security, logging or configuration), or throughout a given domain (e.g., persisting documents in a larger system of which documents are a part). Utilities are mostly purely technical and can be called by all services.

An accessor/persistor is a service that sits closest to the data at hand – be it some external resource or, as is more likely to be the case, the database. These services deal primarily with retrieval and storage, but also with logical grouping of the data in a manner that makes sense to the rest of the application (think aggregates in DDD terms), data-level validation, redaction for security, etc. Since these services are closest to the metal (that being the database), and we don’t want volatility to sneak in there, they need to be the least volatile services apart from utilities. They are the only services that can talk to data sources, and they can be called by use case handlers and computational services. Utilities cannot call them; neither can other accessors/persistors.

A computational service holds hardcore computational logic that can change more often than, say, your database schema, but less often than specific use cases. Computational services can call utilities and accessors/persistors, but it is usually advised that they be purely computational – the data they need to work with is handed to them, they do their thing, and they return their output. Only use case handlers can call them.

A use case handler is the most volatile type and executes business use cases, mostly by orchestrating calls to less volatile services. Use case handlers can call utilities, computational services and accessors/persistors as needed. They also handle operation-level authentication and authorization where applicable. They are the external interface to the “service platform” in that all consumers, be it the UI or external systems, can only talk to use case handlers; less volatile services are thus shielded. What if a use case handler needs to call another use case handler? Within a contiguous subsystem, that should not happen – if it does, that is a design smell. It could become necessary to facilitate communication between two subsystems, in which case it is a better idea to do it through queues, messages and subscriptions (think bounded contexts in DDD terms).
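
A skeletal sketch of the taxonomy – all names hypothetical, and following the naming guidance below – might look like this. Note how the constructor parameters encode the allowed call directions, with each service depending only on less volatile services:

// Illustrative skeleton only; all names are hypothetical.
public class Customer
{
    public Guid Id { get; set; }
    public bool IsPreferred { get; set; }
}

public class AuditUtility // utility: cross-cutting, callable by all services
{
    public void Log(string message) { Console.WriteLine(message); }
}

public class CustomerRepository // accessor/persistor: the only type that touches the data source
{
    public Customer Get(Guid id)
    {
        // Retrieval, logical grouping and data-level validation live here.
        return new Customer { Id = id };
    }
}

public class DiscountCalculator // computational: data is handed to it, it returns output
{
    public decimal CalculateDiscount(Customer customer)
    {
        return customer.IsPreferred ? 0.1m : 0m;
    }
}

public class PlaceOrderHandler // use case handler: orchestrates, and is the external interface
{
    private readonly CustomerRepository customers;
    private readonly DiscountCalculator calculator;
    private readonly AuditUtility audit;

    public PlaceOrderHandler(CustomerRepository customers, DiscountCalculator calculator, AuditUtility audit)
    {
        this.customers = customers;
        this.calculator = calculator;
        this.audit = audit;
    }

    public void Handle(Guid customerId)
    {
        var customer = this.customers.Get(customerId);              // accessor/persistor
        var discount = this.calculator.CalculateDiscount(customer); // computational
        this.audit.Log("Order placed with discount " + discount);   // utility
    }
}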

There are a few more guidelines to top these off:

  • Name your services so their role in the above taxonomy becomes clear. Instead of just “FooService”, think “FooUtility”, “FooRepository” for accessors/persistors, “FooCalculator” or “FooEngine” for computational services, and “FooHandler” or “FooManager” for use case handlers.
  • As you are performing detailed design, keep an eye on the number of operations in a service. If it is starting to feel like a lot, perhaps it is time to decompose a bit further.
  • Treat each service as “the” unit of development for your system. This means each service is a self-contained unit, complete with tests and such. It also means that while you want to stay DRY within a service within reasonable limits, it may not be the best idea to strive for perfect DRYness across services.
  • Each operation should validate its requests and handle authentication and authorization if needed in a self-contained manner.

All of this also aligns pretty well with the SOLID design principles. Now, all of this sounds great on paper; however, there are a few pain points to consider. If you strictly adhere to these restrictions, you will end up creating a lot more services than you would otherwise. For example, there are a lot of cases where the use case simply is “go get the data” or “go store the data”. In such situations, you are forced to create a use case handler that is largely a pass-through to an accessor/persistor. You could relax these rules in such situations, but then you have a dependency from your consumer on your accessor/persistor. If, at some point, the use case evolves, you need to make sure a use case handler is inserted at that point, rather than the supposedly “non-volatile” accessor/persistor being modified.
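
To illustrate with the hypothetical names from the sketch above: the pass-through handler adds nothing today, but it is the seam where tomorrow’s use case logic belongs.

public class GetCustomerHandler // largely a pass-through use case handler
{
    private readonly CustomerRepository customers;

    public GetCustomerHandler(CustomerRepository customers)
    {
        this.customers = customers;
    }

    public Customer Handle(Guid customerId)
    {
        // Pure pass-through today; validation, authorization or orchestration
        // can be inserted here later without touching either the consumer or
        // the accessor/persistor.
        return this.customers.Get(customerId);
    }
}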

The other obvious pain point with this approach is code bloat. More services means more code, and that means more code to maintain. I think when you get to a system of a certain size, that volume of code becomes justifiable; conversely, there is a system size below which a lot of this is too much overhead. It is wise to identify that point and react accordingly. Perhaps better tooling could help, too – something tailored to building systems this way.

One problem I have with this – and in fact with SOA in general – is that your system is made up of all these services with logic distributed among them. If you decomposed by volatility and then by function, then granted, your logic is distributed properly. But there still is no identifiable “core” of the system where the “meat” is, so to speak. That is something DDD addresses, in my opinion – hence my increasing interest in meshing DDD and SOA. More on that later.

Getting on the Domain Driven Design Bandwagon

Domain driven design has been around for quite a while. I believe the definitive book on it, by Eric Evans, first came out in 2004. For whatever reason, I had not been exposed to it in the places I worked, but I had been hearing about it for long enough, and from enough smart people, to give it a try. I researched it online a bit and went through quite a few articles. In particular, the set of articles on DDD by Jimmy Bogard (Los Techies) was quite helpful. Finally, I ended up buying Evans’ book and reading it cover to cover.

I liked what I saw. The whole idea of keeping your domain logic encapsulated within your domain objects appealed to me. There were questions, obviously, but I figured it was worth trying out, so that is what I am deep into currently. The idea of entities, value objects, aggregates and aggregate roots makes sense, but at the same time it also raises questions – especially with regard to database performance. I am hoping I will arrive at satisfactory answers.

As things get more complex, other concepts such as bounded contexts and domain events enter the picture. While I get them in theory, my plan for now is to stay away from getting hands-on with those until I have a good handle on “vanilla” DDD. Another question I have is how this meshes with SOA – whether the two are complementary or mutually exclusive. I would hate to have to give up SOA to stick with DDD. In any case, it feels exciting – and I can’t believe it has been around for so many years and I never got into it.

For anyone getting into DDD, I strongly recommend reading Evans’ book. On a software timescale, it was written aeons ago (when Java was the state of the art, mind you), but all of it still applies – and if you are working with something like C#, as I am, things become even easier, since you have so much more power with these modern languages.

So, for the moment, let’s say I am on the bandwagon. Hopefully I don’t have to get off.

An Easy Service Proxy Executor for WCF

If you have adopted service oriented architecture (SOA) and are using WCF as the hosting/communication mechanism for your internal services, chances are you are doing one of two things: either you publish each service like any old WCF service, and the services that consume it do so through its WSDL; or you create shared libraries containing the contract information, which both the service and its consumers reference. Both approaches are somewhat cumbersome but manageable. If all your services are internal, though, going the WSDL route adds unnecessary overhead and is that much harder to manage.

Now, if you decide to go the second route but still stick to the more obvious interface that WCF provides for instantiating and calling proxies (ClientBase and the like), that is a bit of a waste – those classes were built with generated-code proxies in mind. The better option is to have a mechanism that obtains a ServiceEndpoint and uses it, along with the contract information, to create your own ChannelFactory – on which you can then call CreateChannel to get your proxy. A lot less code, and a lot more manageable.

To this end, for my own purposes, I built a bunch of classes that comprise my WCF service executor module. This is part of the Services namespace in the new Commons Library. Here is what a few of the key classes look like – you should be able to surmise how they can be used. The most common usage example would be:

var response = ServiceCallerFactory
   .Create<IMyContract>()
   .Call(x => x.MyOperation(request));

IServiceCaller

public interface IServiceCaller<out TChannel>
{
    void Call(Action<TChannel> action);
    TResult Call<TResult>(Func<TChannel, TResult> action);
}

ServiceCaller

// Requires System, System.ServiceModel and System.ServiceModel.Description.
public class ServiceCaller<TChannel> : IServiceCaller<TChannel>
{
    private readonly ServiceEndpoint endpoint;
    private readonly EndpointAddress endpointAddress;

    public ServiceCaller() {}

    public ServiceCaller(ServiceEndpoint endpoint)
    {
        this.endpoint = endpoint;
    }

    public ServiceCaller(EndpointAddress endpointAddress)
    {
        this.endpointAddress = endpointAddress;
    }

    public void Call(Action<TChannel> action)
    {
        // Build the factory from the explicit endpoint if we have one;
        // otherwise fall back to the configured endpoint for TChannel.
        var channelFactory = this.endpoint != null
            ? new ChannelFactory<TChannel>(this.endpoint)
            : new ChannelFactory<TChannel>();

        // Apply an explicit endpoint address, if one was provided.
        if (this.endpointAddress != null) channelFactory.Endpoint.Address = this.endpointAddress;

        var channel = channelFactory.CreateChannel();
        try
        {
            action(channel);
        }
        catch
        {
            // Abort rather than gracefully close a possibly faulted factory.
            channelFactory.Abort();
            throw;
        }
        finally
        {
            // Close is harmless after Abort, which leaves the factory closed.
            channelFactory.Close();
        }
    }

    public TResult Call<TResult>(Func<TChannel, TResult> action)
    {
        var channelFactory = this.endpoint != null
            ? new ChannelFactory<TChannel>(this.endpoint)
            : new ChannelFactory<TChannel>();

        // Apply an explicit endpoint address, if one was provided.
        if (this.endpointAddress != null) channelFactory.Endpoint.Address = this.endpointAddress;

        var channel = channelFactory.CreateChannel();
        try
        {
            return action(channel);
        }
        catch
        {
            channelFactory.Abort();
            throw;
        }
        finally
        {
            channelFactory.Close();
        }
    }
}

ServiceCallerFactory

public static class ServiceCallerFactory
{
    private static readonly object serviceCallerMapLock = new object();

    // One ServiceCaller<TChannel> per contract type, stored as object since
    // the cached callers are of different generic instantiations.
    private static readonly IDictionary<Type, object> serviceCallerMap = new Dictionary<Type, object>();

    // Optional hook: given a contract type, produce the ServiceEndpoint to use.
    public static Func<Type, ServiceEndpoint> ServiceEndpointAccessor { get; set; }

    public static IServiceCaller<TChannel> Create<TChannel>(EndpointAddress endpointAddress = null)
    {
        // Lock-free first lookup; this assumes writes are rare and mostly done
        // at startup. A ConcurrentDictionary would make this airtight.
        object caller;
        if (serviceCallerMap.TryGetValue(typeof(TChannel), out caller))
            return (IServiceCaller<TChannel>) caller;

        lock (serviceCallerMapLock)
        {
            // Double-checked: another thread may have created the caller
            // while we were waiting for the lock.
            if (serviceCallerMap.TryGetValue(typeof(TChannel), out caller))
                return (IServiceCaller<TChannel>) caller;

            if (ServiceEndpointAccessor != null)
            {
                var serviceEndpoint = ServiceEndpointAccessor(typeof(TChannel));
                if (endpointAddress != null) serviceEndpoint.Address = endpointAddress;
                caller = new ServiceCaller<TChannel>(serviceEndpoint);
            }
            else
            {
                caller = endpointAddress == null
                    ? new ServiceCaller<TChannel>()
                    : new ServiceCaller<TChannel>(endpointAddress);
            }

            serviceCallerMap[typeof(TChannel)] = caller;
        }

        return (IServiceCaller<TChannel>) caller;
    }
}
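
As for wiring the factory up, one illustrative possibility – the binding and the address convention here are assumptions, not part of the actual module – is to set ServiceEndpointAccessor once at startup:

// Done once at startup; BasicHttpBinding and the address scheme are placeholders.
ServiceCallerFactory.ServiceEndpointAccessor = contractType =>
    new ServiceEndpoint(
        ContractDescription.GetContract(contractType),
        new BasicHttpBinding(),
        new EndpointAddress("http://localhost/services/" + contractType.Name));

After that, any consumer can use the ServiceCallerFactory.Create<IMyContract>() pattern shown at the top without knowing where or how the service is hosted.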

Bye, Bye, TypeScript, for now

As much as I raved about TypeScript in this post from some time ago, sadly the time has come for me to part with it – at least for now. It is a beautiful piece of work by a beyond-brilliant group of people. As I worked more and more with JavaScript over the past year, though, I realized a few things.

The first, which I already mentioned in my previous post, is that it is still maturing and is not quite there yet. One of my pain points was the lack of object initializers, which, in my opinion, took away some of the expressiveness of JavaScript. As I look at it now, though, the deeper issue is the whole idea of trying to hide the fact that everything in JavaScript is a hash-map: you can, and should be able to, create or assign an object on the fly using JSON notation. As soon as you introduce TypeScript annotations into the mix, that goes away. The best of both worlds would be if I could have a variable annotated and still be able to assign or initialize it using JSON (and have the JSON be validated against the annotation).

The other side of that equation is the ability to pass tuples around like primitive variables. That is what you get with JSON objects – and while it looks unstructured through the lens of a stricter language, it is in fact a feature by design. Similar reasoning applies to the whole idea of what functions are in JavaScript, how they define scope, how they nest, and so on. I am not sure how much the syntactic sugar of modules and classes helps in that regard.

Of course, TypeScript is a superset – so you can choose to use TypeScript where you wish and vanilla JS elsewhere – but then you end up with an asymmetrical mess that still needs to go through the TypeScript compiler before it can work. I do not like asymmetry.

The second reason is something I find hard to articulate but have experienced nonetheless. TypeScript gels fine with Angular, but as soon as you start to use certain prominent frameworks like RequireJS or Jasmine, it starts to get in the way somewhat. Regardless, having to go looking for “d.ts” files every time you want to use a library is a pain.

The third reason is more of an invalidation of one of the merits I initially saw in TypeScript: tooling support. At first glance, I was quite impressed by the IntelliSense and whatnot that TypeScript brought to my humble Visual Studio JavaScript editor. Since then, however, the tooling for vanilla JavaScript has gotten a lot better in the major IDEs – to the point where I feel it is not worth putting up with the overhead of having an add-on running.

Of course, even as I write this, my opinions are based on the early version that I have been using. I know TypeScript is progressing rapidly, so it may well become a viable option at some point. Since ES6 is moving in a direction similar to TypeScript’s, however, I believe most libraries will adapt to the new specifications from ECMA – thus rendering TypeScript unnecessary. In any case, at least for now, I have decided to bid farewell to TypeScript.

Bootstrap Modal with AngularJS

We’ll look at some relatively low-hanging fruit for the case where you’re working with vanilla AngularJS and Twitter Bootstrap, without relying on add-ons such as AngularUI’s Bootstrap extension. One common need I have is the ability to show or hide a Bootstrap modal based on a property on my view-model. Here’s a simplified view of the controller:

var app = angular.module('app', ...);
...

app.controller('ctrl', function ($scope, ...) {
    ...
    $scope.showModal = false;
    ...
});

And here is the HTML:

<a href="#myModal" data-toggle="modal">Show Modal</a>
...
...
<div id="myModal" data-backdrop="static">
    <div>
        Modal text goes here.
        <br/>
        <button data-dismiss="modal">Close</button>
    </div>
</div>

In order to maintain separation of concerns, I want to show or hide the modal as the value of showModal changes. This is another good use for directives in AngularJS. As with the datepicker example, we need a directive that adds a watch in its link function and uses the JavaScript methods that Bootstrap makes available to control the modal, rather than the data-toggle or data-dismiss attributes.

The directive would then look like:

app.directive('akModal', function() {
    return {
        restrict: 'A',
        link: function(scope, element, attrs) {
            scope.$watch(attrs.akModal, function(value) {
                if (value) element.modal('show');
                else element.modal('hide');
            });
        }
    };
});

Here, we are calling the Bootstrap method modal on the element to which the directive is applied, i.e. the div that is the modal container. The HTML, modified to work with this directive, then looks like:

<a href="#" ng-click="showModal = true">Show Modal</a>
...
...
<div ak-modal="showModal" data-backdrop="static">
    <div>
        Modal text goes here.
        <br/>
        <button ng-click="showModal = false">Close</button>
    </div>
</div>

The modal display is now bound to showModal. Note how we got rid of data-toggle (along with the id on the div) and data-dismiss. If some property on the view-model controls whether the modal is displayed, it would not make sense to have a link hardwired to trigger that specific modal anyway. The case for data-dismiss is different, though.

Another thing to consider: if you have a lot of modals and a lot of different view-model properties controlling them, you are going to end up with a lot of watches, which you probably don’t want. If we assume that you will mostly have one modal visible at a time (unless you have multiple levels of modals going on – in which case, personally, I think you need to rethink the UX you are providing), you can build something more generic, such as a modalService that works with a single modal div and exposes a showModal operation that takes the content to display. There would need to be a corresponding hideModal operation as well, of course. I plan to explore this further.

Now, back to the data-dismiss thing. What we have at the moment is somewhat of a one-way binding. It would be ideal if this could be made two-way, so that closing the modal using data-dismiss automatically sets showModal to false. So far, I have not put in enough effort to be able to do this in an acceptably performant way. If someone has, I would love to hear about it.