Think Before Coding


Monday, July 29, 2013

Command Events and Context

A remark by @serialseb after my previous post:


First, here is the distinction between a Command and an Event.

A command expresses what you want to do. It is usually expressed with a verb in the present tense.

An event expresses what actually happened. It is usually expressed with a verb in the past tense.

As you can anticipate, what you request is not always what actually happens.

Is a Command still an Event?

Let’s go a bit meta.

The fact that you decide you want to do something is itself something that happens in time. So it can also be considered an event!

True… but what's the usefulness of this?

Someone who wanted to study the correlation between your desires and their realizations would model both as events.

It can be the same in a tracking context: track both causes and effects.

Should we just ignore the distinction, then?

Surely not!

Here again, it's a matter of context. Of Bounded Context.

Let's take an example with a different kind of concepts, Value Objects and Entities, to highlight how the same thing can take different modeling forms in different contexts:

Money is usually modeled as a Value Object, which implies equality by value. If I give you $10 and you give me $10, we're even, even if I gave you a bill and you gave me some coins.

In the context of fraud tracking, this can be very different. Bills have a tracking number, and you can model them as Entities.

If you make both contexts work together, you'll go through an Anti-Corruption Layer for context mapping. You won't model all the money in your accounting as Entities just because some other context models it that way!

In the case of Commands and Events, some contexts can be interested in the fact that a decision was made, so there it will be modeled as an event. But in the context where that decision happens, the decision is different from its actual outcome, and not being clear about this difference can lead to several issues.

For example, if you model a withdrawal with a Withdrawal concept, it is not clear whether it represents the fact that you asked for it to happen or the fact that it has actually been done.

Being explicit with WithdrawMoney / MoneyWithdrawn removes the ambiguity.
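As a sketch (the type names come from the prose; the shapes are assumed), the pair could look like this, one request and one fact:

```csharp
using System;

// The command: a request in the present tense. It may still be refused or fail.
public class WithdrawMoney
{
    public readonly Guid AccountId;
    public readonly decimal Amount;

    public WithdrawMoney(Guid accountId, decimal amount)
    {
        AccountId = accountId;
        Amount = amount;
    }
}

// The event: a fact in the past tense. It happened and cannot be changed.
public class MoneyWithdrawn
{
    public readonly Guid AccountId;
    public readonly decimal Amount;
    public readonly DateTime At;

    public MoneyWithdrawn(Guid accountId, decimal amount, DateTime at)
    {
        AccountId = accountId;
        Amount = amount;
        At = at;
    }
}
```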

In the context of event sourcing, this distinction is very important, as the previous article showed. So is the distinction between upstream events and internal events: upstream events are input and only express things you'd like to happen, so from the system's point of view they are closer to commands.

Of course, if your context is not about time and decisions, don't use these concepts; they would be useless. Writing a compiler? Forget about all this.

Should I have Command DTOs?

Not necessarily. A command can be modeled as a function/method call, so creating Command objects is not mandatory.

They can still be useful for dispatch, storage for diagnostics, functional tests, etc.
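For instance (hypothetical types), the same command can be a plain method call, or be reified as an object when you need dispatch or storage:

```csharp
using System;

public class Account
{
    public decimal Balance { get; private set; }

    public void Deposit(decimal amount) { Balance += amount; }

    // the command as a plain method call: no DTO involved
    public void Withdraw(decimal amount)
    {
        if (amount > Balance)
            throw new InvalidOperationException("Insufficient funds.");
        Balance -= amount;
    }
}

// the same command reified, handy for dispatch, diagnostics or tests
public class WithdrawCommand
{
    public readonly decimal Amount;

    public WithdrawCommand(decimal amount) { Amount = amount; }

    public void ExecuteOn(Account account) { account.Withdraw(Amount); }
}
```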

Sunday, July 28, 2013

Event Sourcing vs Command Sourcing

Last month, I made a presentation about Event Sourcing, a shorter version of my DevoxxFr talk. After me, Etienne and Maher from Sfeir did a presentation on the same subject and their architecture inspired by LMAX.

I immediately noticed their reference to the Event Sourcing page by Martin Fowler, and started to see several points of pain lurking in their long-term maintenance…

I won't write a flaming post against Martin Fowler, who has written lots of interesting stuff. Even this article says nothing wrong. It just takes an approach that can cause long-term pain, as expressed in the page itself…

Sourcing Events, but which Events?

The article starts with a totally valid definition of Event Sourcing:

Capture all changes to an application state as a sequence of events.

The question then is… where do these events come from?

In this case when the service is called, it finds the relevant ship and updates its location. The ship objects record the current known state of the ships.
Introducing Event Sourcing adds a step to this process. Now the service creates an event object to record the change and processes it to update the ship.

A C# sample appears a bit further down to make it clearer:

    class EventProcessor...
        IList log = new ArrayList();
        public void Process(DomainEvent e) {

As you can notice, the Event is produced before reaching the event processor…

Contrast this with the following version:

    class Cargo
    {
        IList log = new List();
        private State currentState;

        public Cargo(IEnumerable events)
        {
            foreach (var @event in events)
                Apply((dynamic)@event);
        }

        public void Arrive(Port port)
        {
            // logic to verify the action can be done
            // based on current state and command parameters
            if (IsAlreadyInPort) throw new Exception();

            // create an event of what happened with this action
            // it should not mutate state,
            // but it can capture external state when it arrives
            // it's also based on current state and command parameters
            var @event = new ShipArrived(id, port, DateTime.Now);

            // apply the change due to the event
            // it should require only current state and the event
            Apply(@event);
            log.Add(@event);
            // events will be published to the rest of the system
            // from there. This is where further side effects will
            // occur
        }

        private void Apply(ShipArrived @event)
        {
            // no decision should happen here!
            currentState.Port = @event.Port;
            currentState.LastMove = @event.Time;
        }
    }

From a functional point of view, this pattern can be built from two pure functions:

Command -> State -> Event list
State -> Event -> State

Here, the stored event has been produced by the aggregate itself! The output is stored.
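A sketch of those two pure functions, with illustrative types (not the actual framework code): decide produces events from a command and the current state, apply folds an event into the state, and replaying is just a left fold of apply over the stored events.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class State
{
    public readonly bool InPort;
    public State(bool inPort) { InPort = inPort; }
}

public class Arrive                       // command
{
    public readonly string Port;
    public Arrive(string port) { Port = port; }
}

public class ShipArrived                  // event
{
    public readonly string Port;
    public ShipArrived(string port) { Port = port; }
}

public static class ShipLogic
{
    // Command -> State -> Event list
    public static IEnumerable<ShipArrived> Decide(Arrive command, State state)
    {
        if (state.InPort)
            throw new InvalidOperationException("Already in port.");
        yield return new ShipArrived(command.Port);
    }

    // State -> Event -> State
    public static State Apply(State state, ShipArrived @event)
    {
        return new State(true);
    }

    // current state is a left fold of Apply over past events
    public static State Replay(IEnumerable<ShipArrived> events)
    {
        return events.Aggregate(new State(false), Apply);
    }
}
```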

Nice, but why should I care?

After all, since Martin says the first version is OK… let's go!

That would be ignoring several warnings in the rest of the article.

External Systems

From the same page:

One of the tricky elements to Event Sourcing is how to deal with external systems that don't follow this approach (and most don't). You get problems when you are sending modifier messages to external systems and when you are receiving queries from other systems.

Many of the advantages of Event Sourcing stem from the ability to replay events at will, but if these events cause update messages to be sent to external systems, then things will go wrong because those external systems don't know the difference between real processing and replays.

The second version doesn't suffer from this problem…

Because rebuilding the state (as done in the constructor) only uses the Apply method (or the ApplyStateChange function in the functional version).

This Apply method only works with internal state and produces no external side effects.

External Queries

Another problem arises with Martin Fowler's proposal:

The primary problem with external queries is that the data that they return has an effect on the results on handling an event. If I ask for an exchange rate on December 5th and replay that event on December 20th, I will need the exchange rate on Dec 5 not the later one.

Here again, the second version doesn't suffer from the problem.

The data from the external system is used to build the event. It can be stored in it directly (like the current time in the sample), but it can also be used in a computation. For instance, the command contains a price in USD; query the current USD to EUR rate, compute the price in EUR and put it in the event.
The rate at the time of the computation is baked into the event! No need to remember the rate value afterward, and especially no need for a complex external system gateway.

It could still be better, for debugging purposes, to put the rate used explicitly in the event.

But the second version intrinsically handles this issue gracefully…
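A sketch of this, with illustrative names: the external rate is queried once, when the command is handled, and both the rate and the computed price are captured in the event. Replaying the event later never re-queries the external system.

```csharp
using System;

// event: everything needed to rebuild state is baked in
public class PriceQuoted
{
    public readonly decimal PriceUsd;
    public readonly decimal RateUsdToEur;  // stored explicitly for debugging
    public readonly decimal PriceEur;

    public PriceQuoted(decimal priceUsd, decimal rateUsdToEur)
    {
        PriceUsd = priceUsd;
        RateUsdToEur = rateUsdToEur;
        PriceEur = priceUsd * rateUsdToEur;
    }
}

public static class Quoting
{
    // rateProvider stands for the external query; it is called only when
    // deciding, never when replaying stored events
    public static PriceQuoted Quote(decimal priceUsd, Func<decimal> rateProvider)
    {
        return new PriceQuoted(priceUsd, rateProvider());
    }
}
```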

External Interactions

Both queries and updates to external systems cause a lot of complication with Event Sourcing. You get the worst of both with interactions that involve both. Such an interaction might be an external call that both returns a result (a query) but also causes a state change to the external system, such as submitting an order for delivery that returns delivery information on that order.

Problem solved by version 2…

Code Changes

So this discussion has made the assumption that the application processing the events stays the same. Clearly that's not going to be the case. Events handle changes to data, what about changes to code?
The third case is where the logic itself changes over time, a rule along the lines of "charge $10 before November 18 and $15 afterwords". This kind of stuff needs to actually go into the domain model itself. The domain model should be able to run events at any time with the correct rules for the event processing. You can do this with conditional logic, but this will get messy if you have much temporal logic. The better route is to hook strategy objects into a Temporal Property: something like chargingRules.get(aDate).process(anEvent). Take a look at Agreement Dispatcher for this kind of style.

Wooo… when I read this, it's a red flag for me! I never want to deal with this kind of problem!

Especially when it is expected to happen for sure!

How does it go with the second version?

Events are produced by the code that contains the logic. Before November 18, the events emitted were based on code that charges $10. After, the code charges $15.

When using the Apply method, it doesn't have to know how much to charge; it's already in the saved events!

There is no need to keep a history of domain logic versions, except in your source control!

It can even cope with changes far more complex than the one in this sample. In any case, all the data needed to compute the current state has been put in the events.
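A sketch of the charging rule from the quote (the year is assumed; names are illustrative): the rule lives only in the deciding code, and Apply just records the amount found in the event, so no historical rule is ever needed at replay time.

```csharp
using System;

public class Charged
{
    public readonly decimal Amount;
    public Charged(decimal amount) { Amount = amount; }
}

public class Billing
{
    public decimal TotalCharged { get; private set; }

    // decision: uses whatever rule the currently deployed code contains
    public Charged Charge(DateTime when)
    {
        var amount = when < new DateTime(2013, 11, 18) ? 10m : 15m;
        var @event = new Charged(amount);
        Apply(@event);
        return @event;
    }

    // replay: no rule needed, the charged amount is in the event
    public void Apply(Charged @event)
    {
        TotalCharged += @event.Amount;
    }
}
```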

Correcting logic bugs

One of the advantages put forward by Martin Fowler is that you can change how you take the decision after the fact.

But an event is an event: it already happened, and there's no way we can go back in time to change it. We won't be able to change external side effects anyway, so just accept it.

It's still possible to apply compensations, like accountants do. When they charge you too much, they don't use a time machine to pretend nothing happened; they just add a chargeback entry at the end of the ledger.
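As a sketch (illustrative names), a compensation is just another entry appended to the log; past entries are never edited:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class LedgerEntry
{
    public readonly string Label;
    public readonly decimal Amount;

    public LedgerEntry(string label, decimal amount)
    {
        Label = label;
        Amount = amount;
    }
}

public class Ledger
{
    private readonly List<LedgerEntry> entries = new List<LedgerEntry>();

    // append only: history is never rewritten
    public void Append(LedgerEntry entry) { entries.Add(entry); }

    public decimal Balance { get { return entries.Sum(e => e.Amount); } }
}
```

An overcharge is not erased from history; a chargeback entry is appended at the end and the balance becomes correct again.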

Command Sourcing?

I call the pattern proposed by Martin Fowler Command Sourcing.

A Command is a request made to the system to do something. At this point, a lot of things can still happen. It can fail, it can be influenced by external state…

An event is something that happened and that cannot be changed.

You can protest that an Arrival event is an event, not a command.

Sure, but for the system it's an upstream event, not something that happened in the system.

Where's the difference in the second version?

The upstream event goes to a reactor that produces an Arrive command (notice the present tense) inside the system.
The system then produces a ShipArrived event (notice the past tense). This event has been emitted by the system itself.

Martin Fowler's version takes a shortcut, bypassing the reactor that emits a command; but then it is equivalent to sourcing commands.

Martin, this is a call to action!

Since a lot of people have read, and will read, the entry on your web site, could you add something about the pattern described here, to reduce the pain of people who want to start with Event Sourcing?

Thursday, November 5, 2009

Event Sourcing and CQRS, Serialization

Be sure to read the three preceding parts of the series:

Event Sourcing and CQRS, Now !  
Event Sourcing and CQRS, Let’s use it
Event Sourcing and CQRS, Dispatch options

Today, we'll study a required part of the event storage: serialization/deserialization.

The easy way

The .Net framework has several serialization technologies that can be used here: binary serialization, XML serialization, or even the DataContract serialization introduced with WCF.

The penalty

The particularity of Event Sourcing is that we never delete or update stored events. They are logged, insert only, once and forever.

So the log grows, grows, grows.

The size of stored events will greatly influence the growth rate of the log.

Xml Serialization

If your system frequently processes lots of events, forget about XML. Far too verbose, you'll pay the Angle Bracket Tax.

Binary Serialization

Binary serialization still costs a lot: even if more compact, it contains type names and field names…

Raw Serialization

You could write serialization/deserialization code into your type.

The type can choose a format, so no extra type/field names are needed. This kind of serialization is very compact (it contains only the required bits) but you cannot read the data back without the deserialization code.

It can be OK if you plan to have a small, fixed number of well-documented events. It's unmanageable if your event type count grows with time and versions.

Avoid it

Let’s consider how data are stored in a database.

A database contains tables. Tables have a schema. When storing a row, there is no need to repeat the column names in each cell. The data layout is defined by the table schema and is the same for every row.

We cannot do exactly the same since events have different schemas, but we work with a limited set of event types that occur many times.

Split schema and data

We can thus store the schemas aside, and specify on each row which schema its data follows. The event data is then stored as raw bits laid out according to the specified schema.

This way you can design tools to explore your log with a complete event representation without needing the original event class, and you get a very compact serialization. Have your cake and eat it too!
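A minimal sketch of such a row format (the layout is assumed here, not the code from the next post): each row carries only a schema id plus the raw payload bits, while field names live once in the schema stored aside.

```csharp
using System;
using System.Collections.Generic;
using System.IO;

public static class EventRow
{
    // the schema id references a layout description stored once, elsewhere
    public static byte[] Write(int schemaId, byte[] payload)
    {
        using (var stream = new MemoryStream())
        using (var writer = new BinaryWriter(stream))
        {
            writer.Write(schemaId);        // 4 bytes
            writer.Write(payload.Length);  // 4 bytes
            writer.Write(payload);         // raw bits, no type/field names
            return stream.ToArray();
        }
    }

    public static KeyValuePair<int, byte[]> Read(byte[] row)
    {
        using (var stream = new MemoryStream(row))
        using (var reader = new BinaryReader(stream))
        {
            var schemaId = reader.ReadInt32();
            var length = reader.ReadInt32();
            return new KeyValuePair<int, byte[]>(schemaId, reader.ReadBytes(length));
        }
    }
}
```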

Stay tuned, the code comes tomorrow…

Tuesday, November 3, 2009

Event Sourcing and CQRS, Dispatch options.

As seen in the previous post, I used dynamic to replay events.

The main reason to use it was to avoid long reflection code in the infrastructure that would have made it hard to read.

I'll show several ways to do this dispatch, with pros and cons in each case.


The proposed solution was using dynamic.

+ Pros: no reflection code involved; the code is very simple.
- Cons: all state change (Apply) methods must have the same name.

I made no performance tests, so I cannot judge whether perf is better or not. It seems the DLR has a rather good cache when the same type is encountered several times, but only measurements can tell.
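For reference, a self-contained sketch of the dynamic option (illustrative types): the runtime binder picks the most specific Apply overload for each event's actual type, with no reflection code in sight.

```csharp
using System;
using System.Collections.Generic;

public class OrderPlaced { }
public class OrderShipped { }

public class Order
{
    public string LastEvent = "none";

    public void Replay(IEnumerable<object> events)
    {
        foreach (var @event in events)
            Apply((dynamic)@event);   // overload resolved at runtime
    }

    private void Apply(OrderPlaced e) { LastEvent = "placed"; }
    private void Apply(OrderShipped e) { LastEvent = "shipped"; }
}
```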

Handlers registration

This is the current implementation in Mark Nijhof’s sample.

The base class maintains a Type to Action&lt;T&gt; dictionary to dispatch events based on their type.

Since an Action&lt;T&gt; delegate must have a target instance, the delegate must be constructed from within the instance, in the constructor.

    public class AggregateRoot<TId>
    {
        readonly Dictionary<Type, Action<object>> handlers =
              new Dictionary<Type, Action<object>>();

        protected void Register<T>(Action<T> handler)
        {
            handlers.Add(typeof(T), e => handler((T)e));
        }

        protected void Replay(IEnumerable<object> events)
        {
            foreach (var @event in events)
                handlers[@event.GetType()](@event);
        }

        // rest of the aggregate root class
    }


Here is code that uses it:


    public class Book : AggregateRoot<BookId>
    {
        private readonly BookId id;

        public Book(BookId id, IEnumerable<object> events) : this(id)
        {
            Replay(events);
        }

        public Book(BookId id, string title, string isbn) : this(id)
        {
            var @event = new BookRegistered(id, title, isbn);
            Replay(new[] { @event });
            // the new event is also appended to the uncommitted events
        }

        private Book(BookId id)
        {
            this.id = id;
            Register<BookRegistered>(OnBookRegistered);
            Register<BookLent>(OnBookLent);
            Register<BookReturned>(OnBookReturned);
        }

        private void OnBookRegistered(BookRegistered @event) { /**/ }

        private void OnBookLent(BookLent @event) { /**/ }

        private void OnBookReturned(BookReturned @event) { /**/ }
    }


+ Pros: still no reflection;
            meaningful method names.
- Cons: additional plumbing code;
            a private constructor to avoid repetition;
            registration occurs at each instantiation.

Convention Based Method Naming

This is the way advocated by Greg Young.

If your event is called BookRegistered, assume the method will be called OnBookRegistered, and find it by reflection. You can implement a cache at the class level to avoid reflection on each dispatch.


    public abstract class AggregateRoot<TId> : IAggregateRoot<TId>
    {
        private static readonly Dictionary<Type, IEventDispatcher> Handlers =
               new Dictionary<Type, IEventDispatcher>();

        private static readonly object HandlersLock = new object();

        protected void Replay(IEnumerable<object> events)
        {
            var dispatcher = GetDispatcher();
            dispatcher.Dispatch(this, @events);
        }

        private IEventDispatcher GetDispatcher()
        {
            IEventDispatcher handlers;
            var type = GetType();
            lock (HandlersLock)
            {
                if (!Handlers.TryGetValue(type, out handlers))
                {
                    handlers = EventDispatcher.Create(type);
                    Handlers.Add(type, handlers);
                }
            }
            return handlers;
        }

        // ... rest of the code here
    }


The dispatcher code:

    internal interface IEventDispatcher
    {
        void Dispatch(object target, IEnumerable<object> events);
    }

    internal class EventDispatcher<T> : IEventDispatcher
    {
        private readonly Dictionary<Type, IEventHandler<T>> handlers;

        public EventDispatcher()
        {
            var h = from m in typeof(T)
                        .GetMethods(BindingFlags.Instance | BindingFlags.NonPublic)
                    let parameters = m.GetParameters()
                    where parameters.Length == 1
                        && m.Name == "On" + parameters[0].ParameterType.Name
                    select EventHandler.Create<T>(m);

            handlers = h.ToDictionary(i => i.EventType);
        }

        public void Dispatch(object target, IEnumerable<object> events)
        {
            var typedTarget = (T)target;
            foreach (var @event in events)
            {
                var handler = handlers[@event.GetType()];
                handler.Call(typedTarget, @event);
            }
        }
    }

    internal static class EventDispatcher
    {
        public static IEventDispatcher Create(Type type)
        {
            return (IEventDispatcher)Activator.CreateInstance(
                  typeof(EventDispatcher<>).MakeGenericType(type));
        }
    }

And the event handler:

    internal interface IEventHandler<T>
    {
        void Call(T target, object argument);

        Type EventType { get; }
    }

    internal class EventHandler<TEntity, TEvent> : IEventHandler<TEntity>
    {
        private readonly Action<TEntity, TEvent> handler;

        public EventHandler(MethodInfo methodInfo)
        {
            handler = (Action<TEntity, TEvent>)Delegate.CreateDelegate(
                  typeof(Action<TEntity, TEvent>), methodInfo, true);
        }

        public void Call(TEntity target, object argument)
        {
            handler(target, (TEvent)argument);
        }

        public Type EventType
        {
            get { return typeof(TEvent); }
        }
    }

    internal static class EventHandler
    {
        public static IEventHandler<T> Create<T>(MethodInfo methodInfo)
        {
            var eventType = methodInfo.GetParameters()[0].ParameterType;

            return (IEventHandler<T>)Activator.CreateInstance(
                  typeof(EventHandler<,>).MakeGenericType(typeof(T), eventType),
                  methodInfo);
        }
    }


The trick here is to create a static delegate with two parameters from an instance MethodInfo that takes one parameter (and one implicit this target).

This way, the delegate is not tied to a specific instance and can be used on any target.
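A self-contained demonstration of this "open instance delegate" trick (illustrative names): the delegate's extra first parameter becomes the target, so one delegate serves every instance.

```csharp
using System;
using System.Reflection;

public class Counter
{
    public int Total;

    private void OnAdded(int amount) { Total += amount; }

    // builds a delegate from the one-parameter instance method;
    // the Counter parameter is the implicit "this" made explicit
    public static Action<Counter, int> OpenHandler()
    {
        MethodInfo m = typeof(Counter).GetMethod(
            "OnAdded", BindingFlags.Instance | BindingFlags.NonPublic);
        return (Action<Counter, int>)Delegate.CreateDelegate(
            typeof(Action<Counter, int>), m, true);
    }
}
```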

As you can see, this option requires more code! I did not want to start with that.

+ Pros: convention-based names mean no manual mapping, the mapping is implicit;
            binding is made at class level instead of instance level.
- Cons: only unit tests can tell when you mess up names;
            not immune to event renames, so have good unit tests!

Apply then Append

I also had a remark that if I forget to Append after Apply, I'll get in trouble.

With handler registration and convention-based method naming, the dispatch can be done by the base class, so I could tell the base class to dispatch, then append the event to the uncommitted events.

This way you end up with something like:

            var @event = new BookLent(/**/);
            Play(@event);

where Play dispatches the event to the right method and appends it to the uncommitted events.

This way you cannot forget.

My problem with this, especially in the convention-based method naming scenario, is that nobody references the event application methods anymore. ReSharper will report them as unused methods, and you won't know unless you run unit tests.

Moreover, you pay the cost of a dynamic dispatch even when you statically know your event type.

Perhaps something like this could be better:

            var @event = new BookLent(/**/);
            Play(@event).With(OnBookLent);
The implementation is not very complicated:

    public class AggregateRoot<TId>
    {
        private readonly UncommittedEvents uncommittedEvents;

        protected EventPlayer<TEvent> Play<TEvent>(TEvent @event)
        {
            return new EventPlayer<TEvent>(@event, uncommittedEvents);
        }

        // ... rest of the code here
    }

    public struct EventPlayer<TEvent>
    {
        private readonly TEvent @event;
        private readonly UncommittedEvents uncommittedEvents;

        internal EventPlayer(TEvent @event, UncommittedEvents uncommittedEvents)
        {
            this.@event = @event;
            this.uncommittedEvents = uncommittedEvents;
        }

        public void With(Action<TEvent> handler)
        {
            handler(@event);
            uncommittedEvents.Append(@event);
        }
    }


This way, methods are referenced at least once, with a type check.

My mind is still not set… What do you prefer?

Monday, November 2, 2009

Event Sourcing and CQRS, Let's use it.

Last time, we started a very basic Event Sourcing/Domain Events/CQRS framework. Be careful: I made an edit in the nested DomainEvents+Handler&lt;T&gt;.Handles&lt;E&gt;() method; the AggregateRoot.Replay method will not work as is, but we won't need it.

We’ll build an equally simplistic application for personal library management.

The Ubiquitous Language will be minimal.

A Book can be Registered with a Title and an ISBN.

A Book can be Lent to a Borrower at some Date for an Expected Time Span.

A Book can then be Returned. If it is Returned after Expected Time Span, the return is Late.

That’s enough for our first try.

The Command Context

The State Change Events

Here is the code for the three events found in the Ubiquitous Language:

    public class BookRegistered
    {
        public readonly BookId Id;
        public readonly string Title;
        public readonly string Isbn;

        public BookRegistered(BookId id, string title, string isbn)
        {
            Id = id;
            Title = title;
            Isbn = isbn;
        }
    }

    public class BookLent
    {
        public readonly BookId Id;
        public readonly string Borrower;
        public readonly DateTime Date;
        public readonly TimeSpan ExpectedDuration;

        public BookLent(BookId id, string borrower, DateTime date,
               TimeSpan expectedDuration)
        {
            Id = id;
            Borrower = borrower;
            Date = date;
            ExpectedDuration = expectedDuration;
        }
    }

    public class BookReturned
    {
        public readonly BookId Id;
        public readonly string By;
        public readonly TimeSpan After;
        public readonly bool Late;

        public BookReturned(BookId id, string @by, TimeSpan after,
             bool late)
        {
            Id = id;
            By = @by;
            After = after;
            Late = late;
        }
    }

These events will usually be serialized to the event storage and onto a service bus, but here everything runs in memory.

The Book Aggregate Root

The book will need to be referenced by an identity in our system. We’ll hide a Guid behind a BookId struct :

    public struct BookId : IEquatable<BookId>
    {
        private readonly Guid id;

        private BookId(Guid id) { this.id = id; }

        public static BookId NewBookId() { return new BookId(Guid.NewGuid()); }

        public bool Equals(BookId other) { return id.Equals(other.id); }

        public override bool Equals(object obj)
        {
            if (ReferenceEquals(null, obj)) return false;
            if (obj.GetType() != typeof(BookId)) return false;
            return Equals((BookId)obj);
        }

        public override int GetHashCode() { return id.GetHashCode(); }
    }


Now, the Book class itself:

  public class Book : AggregateRoot<BookId>
  {
      private readonly BookId id;
      private string title;
      private string isbn;
      private string borrower;
      private DateTime date;
      private TimeSpan expectedDuration;

      public Book(BookId id, IEnumerable<object> events)
      {
          this.id = id;
          foreach (dynamic @event in events)
              Apply(@event);
      }

      public Book(BookId id, string title, string isbn)
      {
          this.id = id;
          var @event = new BookRegistered(id, title, isbn);
          Apply(@event);
          uncommittedEvents.Append(@event); // uncommittedEvents comes from the base class
      }

      public override BookId Id { get { return id; } }

      public void Lend(string borrower, DateTime date,
                              TimeSpan expectedDuration)
      {
          if (this.borrower != null)
              throw new InvalidOperationException("The book is already lent.");
          var @event =
                new BookLent(id, borrower, date, expectedDuration);
          Apply(@event);
          uncommittedEvents.Append(@event);
      }

      public void Return(DateTime returnDate)
      {
          if (borrower == null)
              throw new InvalidOperationException("The book has not been lent.");
          if (returnDate < date)
              throw new ArgumentException(
                "The book cannot be returned before being lent.");
          var actualDuration = returnDate - date;
          var @event = new BookReturned(
                         id,
                         borrower,
                         actualDuration,
                         actualDuration > expectedDuration);
          Apply(@event);
          uncommittedEvents.Append(@event);
      }

      private void Apply(BookRegistered @event)
      {
          title = @event.Title;
          isbn = @event.Isbn;
      }

      private void Apply(BookLent @event)
      {
          borrower = @event.Borrower;
          date = @event.Date;
          expectedDuration = @event.ExpectedDuration;
      }

      private void Apply(BookReturned @event)
      {
          borrower = null;
      }
  }

The class implements AggregateRoot&lt;BookId&gt; and thus provides an explicitly implemented UncommittedEvents property.

The first constructor is used to load the Aggregate Root; the second one is used to build a new Aggregate Root.

The public methods (Lend and Return) are the commands on the Aggregate Root as defined in the Ubiquitous Language.

The structure is always the same :

  1. Validate arguments and state
  2. Prepare state transition using domain logic
  3. Apply state transition (no domain logic should happen here)
  4. Append state transition to uncommitted events

The first constructor uses dynamic to dispatch each event object to the corresponding specific Apply method. If you implement the pattern in a previous C# version, it is advisable to provide a Replay method in the base class that performs the dispatch based on reflection.

That’s all for the entity. No ORM, no mapping… easy.

The Repository

It is often clearer to provide a specific repository interface that exposes only the available methods. With event sourcing it's not that useful, but we'll write one anyway in case you'd like to use dependency injection. The interface is part of the domain and should be in the same assembly as the entity and the events.

    public interface IBookRepository
    {
        void Add(Book book);

        Book this[BookId id] { get; }
    }


The implementation simply derives from the Repository base class; it can be in the application assembly.

    internal class BookRepository :
        Repository<BookId, Book>,
        IBookRepository
    {
        protected override Book CreateInstance(BookId id,
            IEnumerable<object> events)
        {
            return new Book(id, events);
        }
    }

Add and the indexer are implemented by the base class. The only thing to provide is a way to instantiate the class with the expected parameters.

We could use Activator.CreateInstance or reflection to provide a generic implementation. I chose to keep it simpler to read.

The Query context

The Report Database

We'll mimic a reporting table of book lending state.

This would be the data returned from table rows:

    public class BookState
    {
        public BookId Id { get; set; }
        public string Title { get; set; }
        public bool Lent { get; set; }
    }

And this will hide the data table implementation:

    public interface IBookStateQuery
    {
        IEnumerable<BookState> GetBookStates();
        BookState GetBookState(BookId id);
        IEnumerable<BookState> GetLentBooks();

        void AddBookState(BookId id, string title);
        void SetLent(BookId id, bool lent);
    }


We can simply query data to report in the UI, and update the data state.

The implementation will be in memory for now:

    class BookStateQuery : IBookStateQuery
    {
        private readonly Dictionary<BookId, BookState> states =
                     new Dictionary<BookId, BookState>();

        public IEnumerable<BookState> GetBookStates()
        {
            return states.Values;
        }

        public BookState GetBookState(BookId id)
        {
            return states[id];
        }

        public IEnumerable<BookState> GetLentBooks()
        {
            return states.Values.Where(b => b.Lent);
        }

        public void AddBookState(BookId id, string title)
        {
            var state = new BookState { Id = id, Title = title };
            states.Add(id, state);
        }

        public void SetLent(BookId id, bool lent)
        {
            states[id].Lent = lent;
        }
    }

The important point here is that no domain logic occurs.

A RDBMS implementation could use an ORM or simply build DTOs from a DataReader.

The event handlers

We can now denormalize domain state to the reporting database using an event handler:

    class BookStateHandler // implements the framework's handler interface
                           // for BookRegistered, BookLent and BookReturned
    {
        private readonly IBookStateQuery stateQuery;

        public BookStateHandler(IBookStateQuery stateQuery)
        {
            this.stateQuery = stateQuery;
        }

        public void Handle(BookRegistered @event)
        {
            stateQuery.AddBookState(@event.Id, @event.Title);
        }

        public void Handle(BookLent @event)
        {
            Console.WriteLine("Book lent to {0}", @event.Borrower);
            stateQuery.SetLent(@event.Id, true);
        }

        public void Handle(BookReturned @event)
        {
            Console.WriteLine("Book returned by {0}", @event.By);
            stateQuery.SetLent(@event.Id, false);
        }
    }

The Console.WriteLine calls are here to show when things happen; you would usually not use them in production code. Logging this would not provide much benefit since all the events are already stored in the EventStorage.

Using this handler, the IBookStateQuery is kept up to date with the current Command Context state. In an asynchronous environment, this is where eventual consistency is introduced.

We will also add a service that notifies when a user returned a book too late:

    class LateReturnNotifier // implements the framework's handler
                             // interface for BookReturned
    {
        public void Handle(BookReturned @event)
        {
            if (@event.Late)
            {
                Console.WriteLine("{0} was late", @event.By);
            }
        }
    }

Here again, no domain logic; we just do the infrastructure stuff, usually sending an email or an SMS.

View it in Action

    class Program
    {
        static void Main(string[] args)
        {
            ISessionFactory factory = new SessionFactory(new EventStorage());
            IBookStateQuery query = new BookStateQuery();

            DomainEvents.RegisterHandler(() => new BookStateHandler(query));
            DomainEvents.RegisterHandler(() => new LateReturnNotifier());

            var bookId = BookId.NewBookId();
            using (var session = factory.OpenSession())
            {
                var books = new BookRepository();
                books.Add(new Book(bookId,
                   "The Lord of the Rings",
                   /* isbn */ "..."));
                session.SubmitChanges(); // store and publish the new events
            }
            ShowBooks(query);

            using (var session = factory.OpenSession())
            {
                var books = new BookRepository();
                var book = books[bookId];
                book.Lend("Alice",
                     new DateTime(2009, 11, 2),
                     TimeSpan.FromDays(14));
                session.SubmitChanges();
            }
            ShowBooks(query);

            using (var session = factory.OpenSession())
            {
                var books = new BookRepository();
                var book = books[bookId];
                book.Return(new DateTime(2009, 11, 8));
                session.SubmitChanges();
            }
            ShowBooks(query);

            using (var session = factory.OpenSession())
            {
                var books = new BookRepository();
                var book = books[bookId];
                book.Lend("Bob",
                      new DateTime(2009, 11, 9),
                      TimeSpan.FromDays(14));
                session.SubmitChanges();
            }
            ShowBooks(query);

            using (var session = factory.OpenSession())
            {
                var books = new BookRepository();
                var book = books[bookId];
                book.Return(new DateTime(2010, 03, 1));
                session.SubmitChanges();
            }
            ShowBooks(query);
        }

        private static void ShowBooks(IBookStateQuery query)
        {
            foreach (var state in query.GetBookStates())
                Console.WriteLine("{0} is {1}.",
                       state.Title,
                       state.Lent ? "lent" : "home");
        }
    }

We start by instantiating storage for the command context (the ISessionFactory) and the query context (the IBookStateQuery). In production you’ll use persistent storage (a persistent event store and an RDBMS). I highly recommend using a Dependency Injection container for real-size projects.

Then we wire the handlers on domain events.

The application can start.

  • We register a book in the library.
  • We lend it to Alice on 2009-11-02 for 14 days
  • She returns it on 2009-11-08, she’s on time
  • We lend it to Bob on 2009-11-09 for 14 days,
  • He returns it on 2010-03-01, he’s late

The output is the following :

The Lord of the Rings is home.    // written from state

Book lent to Alice                // written by the book state handler

The Lord of the Rings is lent.    // written from state

Book returned by Alice            // written by the book state handler

The Lord of the Rings is home.    // written from state

Book lent to Bob                  // written by the book state handler

The Lord of the Rings is lent.    // written from state

Book returned by Bob              // written by the book state handler

Bob was late                      // written by the late return notifier

The Lord of the Rings is home.    // written from state

We have here a clear separation between the Command side, which handles the domain logic, and the Query side, which handles presentation logic.

Have fun. Questions and remarks expected !

Wednesday, October 28, 2009

Uniqueness validation in CQRS Architecture

This is a short follow up on Bjarte’s Post.

There’s an important thing to consider when needing set validation : why ?

Why do these things need to be considered together and cannot just be handled separately ?

We can distinguish two different parameters in uniqueness : Cardinality and Scope.


Cardinality

There are mainly two types of cardinality :

1 Cardinality

Only one employee can be the boss.

The model could provide an IsBoss property on every employee… But consistency would be very hard to achieve, especially in a CQRS architecture.

We should read the preceding rule as :

The company has only one boss. The boss is an employee.

Now, we can model a Boss property on the Company Aggregate Root that will reference the employee that is the boss. Changing the boss can now be an atomic and consistent operation.
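A minimal sketch of this, with assumed names (the post does not define Company or Employee in code):

```csharp
using System;

public class Employee
{
    public string Name { get; }
    public Employee(string name) { Name = name; }
}

public class Company
{
    // The 1-cardinality constraint lives on the aggregate root:
    // there is exactly one Boss slot, so two employees can never both be boss.
    public Employee Boss { get; private set; }

    // Changing the boss is a single atomic assignment on the root,
    // instead of flipping IsBoss flags on two separate employees.
    public void AppointBoss(Employee employee)
    {
        Boss = employee;
    }
}
```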

We can see that we had to introduce an upper level to manage it (we’ll see this in the Scope section).

n Cardinality

Employees should have different user names.

We can clearly see here that user names must be different because they’ll act as identifiers. This is the goal of almost any uniqueness constraint : the property will be used as a key in a lookup.

The 1 (or 2 or 3) cardinality also acts this way. It’s a way to tag an entity. You can ask “who is the boss ?” and get the answer by a simple lookup of the Boss property, which acts like a bucket in a hash table.


Scope

There is no such thing as global scope.

Even when we say, “Employees should have different user names”, there is an implicit scope : the Company.

Even when we say, “Your ID card number should be unique”, understand, “at the Country scope”.

Even when we say, “Your DNA should be unique”, understand, “at the scope of our current understanding of life”.

Find the scope and see the volume of data whose uniqueness should be enforced.

As we said, properties that have a uniqueness constraint are usually used as lookup values to find those entities. As such, they rarely take part in the child entity’s domain logic.

Instead of having a UserName property on the Employee entity, why not have a UserNames key/value collection on the Company that will give the Employee for a given user name ?

If the Employee count is expected to stay in a limited range, this is the most appropriate solution.

If the number can grow, loading it in memory on each Company hydration is a bit heavy, so keep the directory on disk (using a table with a unique key in an RDBMS as suggested by Bjarte) or any other way that provides the expected performance.
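Sticking with assumed names, a sketch of the in-memory variant : the user name is a key held by the Company, not a property validated on each Employee :

```csharp
using System;
using System.Collections.Generic;

public class Employee
{
    public string Name { get; }
    public Employee(string name) { Name = name; }
}

public class Company
{
    // User names are keys at Company scope; uniqueness is enforced
    // right where the lookup lives, in one atomic operation.
    private readonly Dictionary<string, Employee> userNames =
        new Dictionary<string, Employee>();

    // Lookup: give the Employee for a given user name.
    public Employee this[string userName] => userNames[userName];

    public void Register(string userName, Employee employee)
    {
        if (userNames.ContainsKey(userName))
            throw new InvalidOperationException(
                "User name '" + userName + "' is already taken.");
        userNames.Add(userName, employee);
    }
}
```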


In every case, when a uniqueness constraint appears on a property, the property does not belong to the entity itself but should be viewed as a key to access the entity from the upper-level scope.

Do you have examples that cannot be solved this way ?

Friday, July 24, 2009

DDD and Code ReUse

I read several discussions against Code ReUse and layered architectures recently.

Different kinds of Code ReUse

You can split your code along different concerns :

  • Object model extensions
  • Technical Infrastructure
  • Application code

The first two are good candidates for Code ReUse.

By Object model extensions I’m talking about things that make your code writing less tedious at language level or object model level.

Example of such code are :

  • IEnumerable and Enumerable
  • Collections
  • Reflection helpers
  • Dependency Injection framework

By Technical Infrastructure I mean things that make your code run in its environment :

  • Generic Service Host,
  • ORM, Data Layer
  • Format serializers / deserializers
  • Configuration helpers
  • Communication frameworks (WCF, Service Buses)
  • UI frameworks (MVC, WPF)

The last part is Application code, and here, things are really different.

Application Code ReUse

For a long time I wrote business code in libraries. I then began to notice problems concerning code and data locality.

When you have a single application (process), no problem.

But if two applications need to modify the same entities, the usual solution is to use the same library in both applications so that there is no code duplication. It seems like good practice, but you quickly stumble on several problems – I’m sure you have already experienced them :

  • Synchronization : the same data will be accessed in the same db from two applications; how do you manage conflicts ?
  • Deployment : when you fix bugs or add features, you must redeploy every application that has a dependency on the library. It slows down the release cycle and makes it more risky; changes have more impact.
  • Code locality : when a problem arises, you have to find which application it comes from.

Let’s examine DDD patterns to see how they fit with reuse :


Services

Let’s start easy. Services are stateless; they should deliver simple actions. But to preserve encapsulation, the best option is to expose them as true services in their own process (web service, Windows service, service on a service bus…).

This way, synchronization is managed in process, deployment is a breeze, and no problem with code locality – code executes in one place.


Entities

Entities are retrieved through a Repository Service; hence, they should follow the same rules as Services.

This way, the implementation of a repository that accesses the database is truly an implementation detail. Anyone who wants to talk to an entity sends a command to it; a handler service gets the entity from the repository and passes the command to the entity. The configuration to access the database is local to the process.

Here again, same benefits.
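As an illustration (all names are hypothetical, not the post’s code), the flow looks like this : the caller only knows the command message; the handler process owns the repository and its database configuration :

```csharp
using System;
using System.Collections.Generic;

// The only thing other applications see: a message, not the entity.
public class LendBook
{
    public Guid BookId { get; set; }
    public string Borrower { get; set; }
}

public class Book
{
    public string LentTo { get; private set; }
    public void Lend(string borrower) { LentTo = borrower; }
}

public interface IBookRepository
{
    Book Get(Guid id);
}

// A simple in-memory repository, standing in for the real data access.
public class InMemoryBookRepository : IBookRepository
{
    private readonly Dictionary<Guid, Book> store =
        new Dictionary<Guid, Book>();
    public void Add(Guid id, Book book) { store[id] = book; }
    public Book Get(Guid id) { return store[id]; }
}

// Runs inside the entity's own process; database configuration and
// synchronization concerns stay local to it.
public class LendBookHandler
{
    private readonly IBookRepository books;
    public LendBookHandler(IBookRepository books) { this.books = books; }

    public void Handle(LendBook command)
    {
        var book = books.Get(command.BookId);
        book.Lend(command.Borrower);
    }
}
```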

Moreover, entities should always have different meanings in different bounded contexts; they should have different implementations, so there is no real reason for reuse.

Value Objects

Value objects are a bit different.

Some objects are very specific to a bounded context and don’t need to be reused outside of it.

Others can be a good way to encapsulate shared concepts. Money is usually a good example, but there can also be concepts more specific to the domain (you will find them as words that appear in the Ubiquitous Language of different Bounded Contexts).

They can be shared among different contexts, but rarely between different domains. There are exceptions for very very generic concept like money, but even money often needs to be tweaked a bit for each domain…


Service Bus to the rescue

Once each bounded context is split, you need to organize communications between parts. Here comes the Service Bus and Messages, but now, the only shared parts in the application are :

  • Object model extensions (to code faster and cleaner)
  • Technical infrastructure (so that each process is fully equipped, and there’s not much technical fuss in the application code)
  • General use Value Objects (to manipulate domain concepts instead of int and decimal)
  • Messages (to communicate between contexts)

You could also use web services, but it makes the overall structure less fault tolerant, harder to deploy, and more tightly coupled.

Once you’ve decoupled bounded contexts using messages, the rest is just an internal implementation detail; why would you want to reuse it !

Tuesday, June 23, 2009

Distributed Domain Driven Design and Aggregates

Once again, Gojko Adzic comes with an excellent post : Improving performance and scalability with DDD.

Aggregates are often a bit underused in DDD because they’re difficult to grasp. They’re often seen as a solution to a technical problem. Gojko shows here how to understand them at a domain level. This post gives a clear vision of the role of Aggregates in DDD by placing it in the context of distributed environments.

You should read it !

Friday, June 19, 2009

Strategic Design at DDD Exchange

Gojko Adzic has a post about Eric Evans’ talk at DDD Exchange :

Why do efforts to replace legacy system fail ?

You can also read my previous post about strategic design.

I’m currently working on evolving a large legacy system, and my experience tells me it’s the right way to deal with it !

Don’t try to switch off the legacy system. Go along with it, using anticorruption layers to protect your elegant core domain.

Wednesday, June 17, 2009

Which model is best ? That’s not the question.

Those who have already read the book should know this. It’s still interesting for newcomers, and you can send comments if you want !

During his Talk at ParisJug, Eric Evans presented two possible models for Cargo itinerary.

The first one was built around the notion of Stops (at a Stop, you unload, then you load), and the second one around the notion of Legs (you load at the start of a Leg, and unload at the end). An itinerary could be seen as a list of stops or a list of legs.

The question was, which model is best ?

Of course, there is no answer to this question.

The same question was translated to a comparison of maps : first a map of China from the 16th century, then a Mercator projection map.

We might be inclined to say that the second one is best. But the first one was largely enough for the needs of its time. And if you inspect the Mercator projection, you can notice that it is not that accurate for some tasks; for example, Greenland seems abnormally large.

Why do we use Mercator projection then ?

It became the standard map projection for nautical purposes because of its ability to represent lines of constant course, known as rhumb lines or loxodromes, as straight segments.

If you want a map to compare country areas, use a Gall-Peters projection or a Goode homolosine projection.

So the question becomes :

Which model is more useful ?

And for the question to be complete :

Which model is more useful for what ?

To come back to the cargo application, Stops will be useful to produce orders to unload and reload containers from ships, but Legs will be useful if you need to track transport location or change routing during transport.

You’ll have noticed :

It depends on the context

When should DDD be considered ?

This is a recurring question on the DDD yahoo group. And there was a simple explanation during the ParisJug talk.

DDD is not a silver bullet for all application development, it just helps to manage complexity when the complexity comes from the domain.

No need for DDD when working on a technical application or a small application with few interactions.

You could benefit from DDD when your application looks like the Cargo sample :

  • Route containers based on transports availability
  • Take cost and time into account
  • Know where the boats are
  • Organize loads and unloads
  • Manage container storage (emit order of missions for employees on site)
  • Provide container tracking and tracing to clients
  • Transports can be late, manage it
  • Transports can be canceled, manage it
  • Contracts can be changed, destination can change
  • Containers can be incorrectly routed even if the emitted orders were correct, manage it.
  • Manage taxes
  • Manage time zones
  • Manage currencies
  • Manage constraints and local rules on dangerous container contents

In this kind of application, the complexity doesn’t come from an Xml web service or a database schema. Even without taking into account any technical concern, it is complex !

So there is a simple rule of thumb to know if DDD could apply (independently from the size of the project) :

Try DDD if Domain Complexity >> Technical Complexity

Otherwise, you can just go with your preferred classic architecture.

Tuesday, June 16, 2009

Met Eric Evans at ParisJug

The ParisJug organized a DDD event yesterday in Paris, presented by Eric Evans, the author of Domain Driven Design himself.

He had come to France ten years ago, but had never given a presentation about Domain Driven Design here before.

Thanks to Antonio, Nicolas and the others who organized this presentation in 4 days. You can find a summary of the talk in French here.

Putting the model to work

It was the title of the talk. I’ll not make a full report, since it was mainly what’s in the book, aimed at those who had not grasped the concepts of DDD yet.

I’ll talk about important topics of the content in following posts.

Diner with Eric Evans and Jug guys

After the presentation, we moved to a restaurant with Eric Evans, the organizers and other attendees. I had the chance to be at the same table as Eric, so I had the opportunity to talk with him about a lot of things.

He’s not at all the Pattern Guru kind. He is very careful to let you understand that he hasn’t found a solution to your problems, that you’ll have to work, but that his experience and analysis can help you grasp things a bit more clearly. He’s constantly working hard on several projects to get more experience and to test his knowledge and experimentation against different contexts.

I’ll also talk about part of our discussion in following posts.

Tuesday, May 19, 2009

IOC Container, Go Hide (part 2)

Ok, there seems to be some misunderstanding about the suggestion from my previous post.

A framework manipulates two kinds of objects :

  • its objects
  • your objects

You don’t care about its objects; you care about yours.

So you want to be able to take part in your objects’ instantiation so that you can do your own stuff (inject dependencies, handle them differently based on runtime behavior…).

Without container

Look at ASP.NET : when it needs to create an HTTP handler, it calls an IHttpHandlerFactory.

You can provide your own IHttpHandlerFactory; just implement its two methods (the second one can even be left empty in most cases) :

public interface IHttpHandlerFactory
{
    // Methods
    IHttpHandler GetHandler(HttpContext context, string requestType, string url, string pathTranslated);
    void ReleaseHandler(IHttpHandler handler);
}

In the GetHandler method, you’re free to use the container you want ! But you can also do this :

return new CustomHttpHandler(theDependency);

Or make a call to another factory, or… or…

With container

The framework asks you to provide a container so that it can simply run. Ok, there is also a built-in container, so I don’t have to care.

But if I want to do a custom instantiation I have now to implement the following interface :

public interface IServiceLocator : IServiceProvider
{
    // Methods
    IEnumerable<TService> GetAllInstances<TService>();
    IEnumerable<object> GetAllInstances(Type serviceType);
    TService GetInstance<TService>();
    TService GetInstance<TService>(string key);
    object GetInstance(Type serviceType);
    object GetInstance(Type serviceType, string key);
}

This interface will be used whenever any instantiation occurs. If I mess up an internal framework instantiation… ouch !

And this interface carries no clear meaning : it should be able to instantiate any object requested.

It doesn’t give a clean and clear API in my personal opinion !

Using an IOC container as an extension point is clearly an over abstraction !

Provide clear, focused extensibility points

The framework doesn’t need to be extended on every instantiation, especially when instantiating its own internal stuff. There should be clear extension points, and a specific interface should be created for each.

This is just a framework design good practice.

Then there is no need to expose the container you use to the outside, and this resolves potential version conflicts.

Friday, May 15, 2009

IOC Container, Go Hide !

While testing NServiceBus and MassTransit – yes, I need a service bus framework for my current project – I’ve seen that both libraries rely on an IOC container, in two different ways.

Warning: this article is not meant to flame these two frameworks, which seem of great quality. There is still little guidance on using IOC containers in libraries; this is the topic of this post.

The NServiceBus way

NServiceBus relies on Spring or Castle Windsor.

You can notice it when instantiating the Bus :

var bus = NServiceBus.Configure.With()
                .SpringBuilder() // or .CastleWindsorBuilder()

And when looking at the library with Reflector :




Yes, the Spring framework and Castle.Windsor assemblies are ILMerged into the NServiceBus assembly.

NServiceBus abstracts the container with the NServiceBus.ObjectBuilder.IBuilder interface :

public interface IBuilder
{
    // Methods
    T Build<T>();
    object Build(Type typeToBuild);
    IEnumerable<T> BuildAll<T>();
    IEnumerable<object> BuildAll(Type typeToBuild);
    void BuildAndDispatch(Type typeToBuild, Action<object> action);
}


The MassTransit way

MassTransit adopts a slightly different strategy.

The base is still the same.

It uses the CommonServiceLocator to have a ‘standard’ interface that hides the actual IOC container implementation.

It provides implementations for the most common IOC frameworks (Castle.Windsor, NInject, StructureMap and Unity – but it doesn’t work so well…) through additional dlls.

The big difference is in the library configuration. You configure the container (through code or configuration), then encapsulate it in a Common Service Locator implementation that acts as an adapter, and finally give it to the library.


What’s the problem

In both cases the intent is good, but the road to hell is paved with good intentions.

In Mass Transit, the design is clearly made so that you can choose your container and integrate the library seamlessly with it. You can manage the configuration in your container the way you do it in your application.

But wait ! What if I don’t need an IOC container in my application ?

The other problem is that Mass Transit relies on some advanced IOC capabilities, like contextual configuration : the object instantiated for an IEndPoint should differ depending on the parent object. This scenario is not handled by Unity, for instance.

Maybe Unity is not good enough, but how can I know which other specific features Mass Transit relies on ? No clue.

And providing a library configuration through a container doesn’t seem like a best practice to me. The API gives no clue about what I should provide to the library in order to run it.

The only way to know is to launch it, see where it throws an unresolved dependency exception, add the dependency and retry !

And I’ll probably never know about optional dependencies.

On the other side, NServiceBus works with an NServiceBus-specific configuration (code and app.config) that indicates clearly what I must provide to the library.

But Jak Charlton had a serious problem with NServiceBus : he’s not using the same version of Castle.Windsor as the one merged in the NSB assembly ! And the assembly load fails.


What’s the solution then ?

I clearly prefer the specific configuration scheme of NServiceBus, but how can we solve the version problem ?

I will answer with another question :

Why does NServiceBus need two IOC container implementations ?

For library creators, I will propose this way to go :

  • Choose the container that provides the features you need
  • Use it in your infrastructure
  • Create a clear configuration model that exposes the required and optional dependencies that should be provided by the library user
  • Consider creating an app.config specific configuration (there are good tools in the framework for that)
  • ILMerge your container framework as internal in your assembly.

The alternative to ILMerge is to fork your framework (if it’s open source) and put it as internal directly in your code.
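As an illustration of the third point (all names are hypothetical), a configuration model that makes required and optional dependencies discoverable through the API itself, instead of hoping the container resolves them :

```csharp
using System;

public interface IMessageSerializer { }
public class XmlMessageSerializer : IMessageSerializer { }

public class BusConfiguration
{
    // Required dependencies are constructor parameters:
    // you simply cannot build the configuration without them.
    public BusConfiguration(string inputQueue, IMessageSerializer serializer)
    {
        if (inputQueue == null) throw new ArgumentNullException("inputQueue");
        if (serializer == null) throw new ArgumentNullException("serializer");
        InputQueue = inputQueue;
        Serializer = serializer;
    }

    public string InputQueue { get; }
    public IMessageSerializer Serializer { get; }

    // Optional dependencies are settable properties with sensible defaults,
    // visible at a glance instead of discovered through runtime exceptions.
    public int RetryCount { get; set; } = 5;
}
```

This way, forgetting a required dependency is a compile-time error or an immediate exception, not an unresolved-dependency surprise at runtime.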

The advantages

  • No conflict with potential other versions of the container framework
  • A clear discoverable configuration
  • No need to use an IOC container to use the library.

What if the container needs to inject dependencies in the user objects ?

Both NServiceBus and MassTransit instantiate the user’s objects on the fly.

How can the user add his own dependencies if he has no access to the container ?

Let’s step back a little and consider what we would do if there was no container…

  • We would use Activator.CreateInstance to create the object.
  • Then we would consider it would not let the library user enough options, so we would propose a hook so that the user can manage the instantiation himself. It could be a callback or an interface.

When instantiating user objects with the framework’s internal IOC container, you take away from your users the right to manage the instantiation themselves.

So come back to this good practice : if the user wants to use an IOC container to instantiate his objects with dependencies, let him do it his own way. And his container will not be loaded with all the framework’s internal dependencies, which will avoid other conflicts.
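A sketch of that hook (names are mine, not from either framework discussed) : the library defaults to Activator.CreateInstance, and the user can plug in his own factory, container-backed or not :

```csharp
using System;

// A narrow, meaningful extension point instead of a full service locator.
public interface IHandlerFactory
{
    object Create(Type handlerType);
}

// Default used when the user provides nothing: no container involved.
public class ActivatorHandlerFactory : IHandlerFactory
{
    public object Create(Type handlerType)
    {
        return Activator.CreateInstance(handlerType);
    }
}

// A user handler type, for the sake of the example.
public class PingHandler { }

public class Bus
{
    private readonly IHandlerFactory factory;

    public Bus(IHandlerFactory factory = null)
    {
        this.factory = factory ?? new ActivatorHandlerFactory();
    }

    // The library asks the factory whenever it needs a user object;
    // how that object is built stays the user's business.
    public object CreateHandler(Type handlerType)
    {
        return factory.Create(handlerType);
    }
}
```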


Hide your IOC container framework inside your library; it’s a private implementation detail of your framework, and we don’t wanna know !

Choose the framework you like, hide it so that it cannot conflict with the one I want to use and we will be friends !

It surely advocates for frameworks with a small footprint, but once again, it’s a private detail.


Continued on IOC Container, Go Hide (part 2)