Think Before Coding


Thursday, January 20, 2011

Code code code

There is some sample/proof-of-concept code on code.thinkbeforecoding.com (hosted by bitbucket.org).

 

You can have a look at it and contribute if you want.

 

Enjoy !

Tuesday, October 19, 2010

Duck : Delete Update Create Killer

I recently had a remark from Frederic Fadel from Aspectize, about Event Sourcing, something like:

Why would you want to write SQL to write data to your read model when our product can do it for you?

I acknowledge that their product is fancy. You simply declare your db data schema, your UI and services and bind it all together.

But it doesn’t fit well with CQRS and Event Sourcing. And I want to do Event Sourcing for domain and business reasons, not technical reasons.

But he was right on this point:

I don’t want to write SQL to denormalize my events to my queryable storage.

What are my options ?

  • Writing SQL by hand, but testability is low, and you’ll get a mix of C# to get data from the events and SQL for the UPDATE queries.
  • Using an ORM. With NHibernate you don’t really write SQL, but testability won’t be great anyway.

The problem with ORMs

ORMs are usually better at getting data than at changing it. You can do it, but let’s look at what happens.

The ORM loads data from your db into entities that will be tracked by an identity tracker. Then you change the values in memory. Then the ORM finds what needs to be sent back to the server and makes a query to the db so that the change happens.

But what I need to do is a lot simpler: just emit some INSERT, UPDATE or DELETE statements based on current table values and event data.

With an ORM, what happens if the data is changed between loading and saving? I’ll have to manage some versioning and/or transactions. And I’ll make two roundtrips to the server needlessly.

Here comes Duck

Duck is a kind of ORM oriented toward Delete Update Create.

Don’t ask Duck to load data in memory, it simply can’t.

You simply express how data should change based on current row content and values that you’ll pass.

It avoids the first roundtrip to the database, and makes the code expressing the change shorter.

Let’s see how to use it

First, you declare a class that has the structure of your table, with public get/set properties, and mark it with a Table attribute:

    [Table]
    class Species
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
        public string BinomialName { get; set; }
        public bool IsEndangered { get; set; }
        public int Population { get; set; }
    }

It represents the species currently observed at an observatory.

Then a simple new statement. Let’s say that a new species has been registered at the observatory:

    var duck = new DuckContext(connectionString);
    var speciesId = Guid.NewGuid();
    duck.In<Species>()
        .Insert(() =>
            new Species
            {
                Id = speciesId,
                Name = "Mallard",
                BinomialName = "Anas platyrhynchos",
                IsEndangered = false,
                Population = 50
            });

Nothing tricky here.

The observatory noticed a population decay; the species is now endangered:

    duck.In<Species>()
        .Where(r => r.Id == speciesId)
        .Update(r => new Species
        {
            Population = r.Population - 40,
            IsEndangered = true
        });

Here, the use of the current value of Population will not load the current value. It will produce the following statement:

UPDATE Species
SET
    Population = Population - 40,
    IsEndangered = 1
WHERE
    Id = @p0

I chose to create a new row from the old one rather than change the original one for two reasons:

  • It makes rows immutable, and you don’t have to think about execution order between field assignments. It’s the way SQL works.
  • Linq Expressions cannot represent statement blocks and assignments in C# 3; Duck would have been .Net 4 only…

The –40 is directly in the query here because we used a constant. If we were using a variable, the query would contain a parameter.
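For instance, a hedged sketch using the same Duck API as above: pulling the decrease out into a local variable should yield a parameterized query instead of an inline constant.

```csharp
// The decrease now comes from a variable, so Duck can emit
// "SET Population = Population - @p1" with @p1 bound to decrease,
// instead of inlining the constant -40 into the statement.
var decrease = 40;
duck.In<Species>()
    .Where(r => r.Id == speciesId)
    .Update(r => new Species
    {
        Population = r.Population - decrease,
        IsEndangered = true
    });
```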

Now the species has disappeared; it should be removed from the observed species (though it could be just an update somewhere else):

    duck.In<Species>()
        .Where(r => r.Id == speciesId)
        .Delete();

Testability

To run your tests you just have to use the InMemoryDuckContext. You then have access to Table<T>(), which enables you to set up your data and verify after execution that things changed as expected. I’ll talk a bit more about it soon.
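A minimal sketch of what such a test could look like. InMemoryDuckContext and Table<T>() are the names from the post; the Add/Single calls on the in-memory table are assumptions about its API, not documented behavior:

```csharp
// Arrange: seed the in-memory table with a known row
// (Add/Single on Table<T>() are assumed here).
var duck = new InMemoryDuckContext();
var speciesId = Guid.NewGuid();
duck.Table<Species>().Add(new Species
{
    Id = speciesId,
    Name = "Mallard",
    Population = 50
});

// Act: run the same update logic as in production.
duck.In<Species>()
    .Where(r => r.Id == speciesId)
    .Update(r => new Species { Population = r.Population - 40 });

// Assert: the row changed as expected, no database involved.
var row = duck.Table<Species>().Single(r => r.Id == speciesId);
Debug.Assert(row.Population == 10);
```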

Try it now, it’s OSS

You can grab the code at bitbucket and try it now :

http://bitbucket.org/thinkbeforecoding/duck

It’s in F#! Writing an AST analyzer in F# is far easier, more concise and expressive than in C#. You’ll just have to reference Duck in your project; there’s no direct F# dependency.

Next episode will be about how to mix it with Rx (Reactive Framework) to declare your event handling logic.

Hope you like it, and don’t hesitate to give feedback and suggestions.

Monday, June 14, 2010

DDD Exchange 2010

I could not attend this year’s edition, which seemed really great, with Eric Evans, Greg Young, Udi Dahan, Ian Cooper and Gojko Adzic.

The videos from the event should soon be somewhere around here.

And you can already find transcripts of the talks on Gojko ‘I post faster than people talk’ Adzic’s blog:

Eric Evans: Domain driven design redefined

Udi Dahan: the biggest mistakes teams make when applying DDD

Greg Young: Evolution of DDD: CQRS and Event Sourcing

 

If you also missed it, don’t make the same mistake next year, and register now for £50.00 (instead of £250.00) until the end of the week.

Sunday, April 25, 2010

Event Sourcing and CQRS, Events Deserialization

So we have our events serialized in our event store. Deserializing events is not an issue until we start to make them evolve and need to manage several versions.

Since we never modify what has been logged, we’ll have to deal with old versions anyway.

A simple way to do it is to maintain every version of the events in the projects, and make the aggregate root accept all of them. But this will soon load the aggregate root with a lot of code and make it bloated rapidly.

This is why you usually introduce a converter that will convert any version of the event to the last one (usually you provide methods to upgrade to the next version, and iterate until the last version, so that this part of the code is incremental). This is a convenient way to address the problem, but you still have classes v1, v2 … vn that you keep in your project only for versioning purposes, even if you don’t use them anymore in your production code.

Events as documents

It is easy to deserialize an event as an object or a document; you only need to split two responsibilities in your deserialization process:

  • Stream reading
  • Object building

The deserializer will be in charge of reading the data: it reads the bits and gets the meaning from context, and it will tell the Object Builder about object types, field names and values.

On its side, the object builder will instantiate the objects and set field values depending on names.

You can provide two distinct Object Builders. The strongly typed one will instantiate concrete .net types and set fields using reflection. The document builder will instantiate objects that are only property bags.

When deserializing an event in its last version, you can directly use the strongly typed one, but when reading a previous version of the event, you can deserialize it as a document and give it to the converter.

The converter will then add/remove properties from the document to make it up to date, and the document will be used to create a concrete .net type of the last event version.

Here the process is quite the same; you should provide a document reader that will use the strongly typed object builder to instantiate the event.

There’s no need to keep every version of your Event classes now, since you can manipulate old versions as documents.

Using dynamic in C#4

Document manipulation can make things a bit messy since it can be hard to understand the original structure of the object. This is where you can use the DLR DynamicObject class to make the property bag (the document) a dynamic object that you’ll be able to use like any standard .net object.

This way, in the converter you can manipulate old versions of the events as .net objects without having to keep all those old classes that won’t be used anymore.
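A minimal sketch of such a dynamic property bag, assuming a plain dictionary as the underlying document. The conversion step at the end is illustrative (the FullName/FirstName split is an invented example, not from an actual event schema):

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

// A document is just a property bag; DynamicObject lets the
// converter read and write members as if it were a real event class.
public class EventDocument : DynamicObject
{
    private readonly Dictionary<string, object> values =
        new Dictionary<string, object>();

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return values.TryGetValue(binder.Name, out result);
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        values[binder.Name] = value;
        return true;
    }
}

// Illustrative converter step: upgrade a v1 document by splitting
// FullName into FirstName/LastName, no v1 event class required.
// dynamic doc = new EventDocument();
// doc.FullName = "John Doe";
// var parts = ((string)doc.FullName).Split(' ');
// doc.FirstName = parts[0];
// doc.LastName = parts[1];
```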

Saturday, April 17, 2010

Event Sourcing and CQRS, Bounded Contexts

Once again, I prefer a new post to a long comment reply. This one is about an important concept of Domain Driven Design: Bounded Contexts.

Hendry Luk asked :

Just 1 question, you represent borrower in events as a simple full-name string.
Is there any reason or just for sake of simplicity for example?
Supposed I'm using borrowerId, how would that work in other BC, say
LateBookNotifier (let's assume its a separate BC). How does this BC shows the
name of the borrower? Does it communicate directly with command BC using ACL?
Or does it also subscribe to BorrowerRegistered event as well (hence every BC
would have duplicate data of each of the borrowers, just like they do each of
the books)?

The short answer is ‘Yes, it was just for the sake of simplicity’. In a real world scenario, borrowers would probably be entities, and thus would have an identity. It would probably even be an Aggregate Root.

The Borrower Aggregate Root would encapsulate state needed to perform commands on this Aggregate.

Bounded Contexts Communications

I can see the following contexts here:

  • Inventory: manages book availability and state (the book has been damaged, there are notes written on it, etc.)
  • Relationships: manages contact with borrowers by email or phone, and tracks the care they take of your books, whether they return them on schedule.

Since we are using CQRS (and even more, Event Sourcing), aggregates in these contexts don’t need more state than what’s needed to take decisions.

So a Book in the Inventory Context will probably not need more than the Id of the borrower and the date at which it was borrowed.
We can then call the ReturnToShelf command on the Book, which will publish a ReturnedLateToShelf { Book : bookId, By : borrowerId, After : 20 days, LateBy : 6 days }.

A Handler at the Relationships Boundary will catch the event, and call CheckExcuseForLateReturn on the Borrower Aggregate Root (based on its id). The command will check the borrower’s record to see if it’s acceptable. It will simply publish a LateReturnGentlyAccepted if the borrower is usually on time, but will publish a KindnessLimitReached in the other case.

Another handler will catch it, and call SendAngryMessage on the Messaging Service. The role of the Messaging Service is to tweet borrowers to let them know they should not forget to return your books. How does this service know the twitter account of the borrower? When the handler (the one that calls SendAngryMessage) catches a BorrowerRegistered event or a BorrowerTwitterAccountChanged message, it tells the service, which can maintain a list of accounts in any desired storage (SQL, NoSql, in memory… ?). SendAngryMessage can now tweet ‘Hey you filthy @borrower, you better return my book today or I shall share all the pics from your last party…’

Done.
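The first hop of that flow can be sketched as follows. The event and command names come from the example above; the repository abstraction and its Get method are assumptions for the sake of the sketch:

```csharp
// Relationships context: reacts to an event published by Inventory.
public class ReturnedLateToShelfHandler
{
    // Assumed repository abstraction for loading aggregates by id.
    private readonly IBorrowerRepository borrowers;

    public ReturnedLateToShelfHandler(IBorrowerRepository borrowers)
    {
        this.borrowers = borrowers;
    }

    public void Handle(ReturnedLateToShelf @event)
    {
        // Load the Borrower aggregate by id and let it decide; it will
        // publish LateReturnGentlyAccepted or KindnessLimitReached.
        var borrower = borrowers.Get(@event.By);
        borrower.CheckExcuseForLateReturn(@event.LateBy);
    }
}
```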

Where does data live ?

There’s usually a huge concern about data duplication across contexts. Will the info be duplicated in so many places?

There will be two main places :

  • The Persistent View Model used to see and edit borrower’s details
  • The Persistent View Model used by the messaging service to Query borrower’s twitter accounts. Here, no other borrower’s data is needed except its id and account name.

The Borrower Aggregate Root and Book Aggregate Root in the two main Domain Bounded Contexts will not need to keep track of this kind of data. They won’t need it in their decision process.

If you pursue this idea, to answer Leonardo further, you’ll notice that strings will probably never be used as state inside a Domain Bounded Context. They can appear as identity keys, or just pass through a command and be republished in the following event. But since strings are rarely – if ever – a good way to represent information on which you’ll have to take a decision, they should almost never be stored in an aggregate root’s current state. This is another reason why most domain models can fit in memory: names, descriptions and other documents usually represent the biggest part of the data in a system, and the remaining data is usually small. These documents and names are useless to run the domain’s internal logic (except for validation rules, but not state change rules), so they can simply be logged in events and persisted in the Query’s View Models. Only state needed to take state change decisions will stay in memory.

Thursday, February 25, 2010

Event Sourcing and CQRS, Snapshots !

Leonardo had a question about reloading huge amounts of events.

It’s true that some Aggregate Roots have very long lifetimes with lots of events, and it can become a problem.

 

There are two things involved to resolve this problem :

Snapshots

Ok, the philosophy of event sourcing is to store changes instead of state, but we’ll still need state in our Aggregate Roots, and rebuilding it from scratch can be long.

Take a snapshot every n events (you’ll see that n can be quite high), and store it alongside events, with the version of the aggregate root.

 

To reload the Aggregate Root, simply find the snapshot, give it to the Aggregate Root, then replay the events that happened after the snapshot.

 

You only need the last snapshot for each Aggregate Root, no need to log all passed snapshots.

 

When you want to change the stored state of an Aggregate Root, you won’t be able to use the last snapshot since it will not contain the expected state. But you can still replay the events from scratch when that happens, so you lose nothing; simply take a new snapshot with the new state.
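The reload described above can be sketched like this. The snapshot store, event store and aggregate methods are assumed abstractions for illustration, not an actual library API:

```csharp
// Rebuild an aggregate from its latest snapshot plus newer events.
public Book Load(BookId id)
{
    // Only the most recent snapshot matters; older ones can be discarded.
    Snapshot snapshot = snapshots.TryGetLatest(id);

    int fromVersion = snapshot != null ? snapshot.Version : 0;
    var events = eventStore.GetEventsAfter(id, fromVersion);

    var book = snapshot != null
        ? Book.FromSnapshot(snapshot)   // restore stored state
        : new Book(id);                 // or start from scratch

    book.Replay(events);                // then apply the missing events
    return book;
}
```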

 

In memory domain

Usually with an ORM, you reload entities from the storage on every unit of work.

 

But in the case of Event Sourcing, your Aggregate Roots only need to retain state that will be used to take business decisions. You’ll never query state from Aggregate Roots. A large part of the entity state and especially the part that has the biggest memory footprint is usually stored only for queries, like names, descriptions and things like that.

 

In an Aggregate Root in an Event Sourcing environment, a name or description can simply be checked for validity and put in an event, but doesn’t need to be kept in the in-memory entity state – the Aggregate Root fields.

 

You’ll notice that your big domain state can fit in memory once you’ve trimmed it this way.

 

Now that your model is in memory, there’s no need to reload every event on each unit of work. It happens only once, when the Aggregate Root is needed for the first time.

 

We’ll see soon how you can use this to make your event serialization even faster, to sustain very high peak business throughput.

Thursday, December 10, 2009

Business Errors are Just Ordinary Events

Error handling has always been something quite difficult to grasp in software design and still is.

Exceptions are now widespread in languages, and they help a lot to manage corner cases where something fails badly.

But should we use Exceptions to manage business errors ?

The business errors

What do we call business errors actually ?

Broken Invariants

What if an invariant is broken?

The situation should never happen: there is a bug. A bug is not a business error; correct it and deploy.

The situation can happen sometimes: then this is not an invariant, but a rare state. It should be handled as any other state change.

Invalid commands

What if we receive an invalid command?

The command data is meaningless: there’s a bug; you should always validate that command data is not just garbage.

The command leads to an invalid state: the user nonetheless requested to perform the command.

In this case the event will be ‘the request was rejected’. The event can be handled by sending an email back to the customer, or a support request can be opened so that support can call the customer and manage the problem. All this is part of the business process anyway.
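A sketch of that idea, with invented type names (BookingRequest, BookingRejected and the rest are illustrative, not from an actual codebase): the aggregate publishes a rejection event instead of throwing.

```csharp
// Business rejection modeled as an ordinary event, not an exception.
public void PlaceBooking(BookingRequest request)
{
    if (availableRooms == 0)
    {
        // The command is valid; the rejection is just a business fact,
        // recorded like any other event.
        Append(new BookingRejected(request.CustomerId, "No room left"));
        return;
    }
    Append(new BookingAccepted(request.CustomerId));
}

// A handler elsewhere turns the rejection into a business action:
// email the customer, open a support case, propose a nearby hotel…
```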

Corner cases create business opportunities

I can often see discussions around account validation for credit, to make the transaction fail when your account goes below zero.

But that’s not what happens in real life. The transaction is accepted, then the bank charges you because your account is in the red.

I’m currently working in the hotel business. When a booking is received and there’s no room left, should I reject the booking? Another client may cancel soon, or I can move the customer to another hotel nearby; just saying ‘there’s no room left’ is not a good business answer! Overbooking management has even become a strategic practice in the business.

To fully manage your customers you should embrace the whole business lifecycle in your system. This includes support and corner case management. Part of it will be done by hand, the rest automatically, but you should not just report an exception in a trace log.

These critical situations are usually the ones in which your customer needs you more than in any other case; you should design your fault handling strategy with care and make it a full concern of your business.

Udi Dahan's post on CQRS

Udi Dahan wrote a new post on CQRS today : Clarified CQRS

It is essentially the content of the presentation he gave here in Paris and in other places.

You should read it if you want to understand the deep reasons to use CQRS and see how to change your mind to use it.

Tuesday, November 17, 2009

Udi Dahan talks on CQRS in Paris

Udi Dahan gave a very good talk yesterday evening at Zenika. There were only a few attendees… perhaps because it was on a Monday evening. Whatever; there was barely enough room in the Italian restaurant where we moved afterwards.

I won’t make a full report, just talk about some interesting points.

First of all, the session focused mainly on why you should do CQRS, not how. Second point: the talk was not about event sourcing, but you already know that you can do CQRS without event sourcing.

Something we should accept : Stale Data

The paradigm of usual architecture best practices has a serious flaw: when you show data to your users, it’s already stale.

Is it important ? Yes.

Is it a problem ? Not really.

The world has worked with stale data for years, and it was handled rather gracefully until now. Computers have reduced the time span, but when the data appears on the screen, it’s stale.

Tell it to your users; they will accept it. Find with them what is acceptable. 1 second, 10 seconds, 1 minute, 1 hour, 1 day? The users are used to it in their own business. Do it too.

Queries

What’s the purpose of queries ? To show data. Not objects.

So why should the data from the database come across 5 layers through 3 model transformations ? It’s a bit overkill to display data.

Why not just this: the UI reads data from the database and displays it?

No DTOs, no ORM, no business rules executed on each query.

You simply define a Persistent ViewModel (thanks Udi, I like this description of the Q side), and display it directly on screen. It should be as simple as one database table per UI view.

Of course you need a way to keep the Persistent ViewModel up to date, but we’ll see that later.

Commands

On the other side, there are commands.

It should be done in 3 phases :

Validation

Is the input potentially good? Structured correctly, no missing fields, everything fitting in ranges?

This can be done without knowing current state, and be done outside of entities command handling.

Rules

Should we do this ?

Here, the decision is taken using current state.

It leads to a discussion about UI design. In order to handle the user command as well as you can, you have to capture the user intent in the command.

In CRUD applications, the new data is sent by the UI layer. You have to extract the user’s intent from that data to know if you can process it.

There is a huge difference between UserMovesToNewAddress and CorrectTheMisspellingInUserAddress from a business point of view, but in a CRUD application you would probably end up with the same Update data…
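A hedged sketch of the difference. The two command names come from the example above; their fields and the Address type are assumptions:

```csharp
// Two distinct intents that a CRUD "Update" would flatten
// into the same row change.
public class UserMovesToNewAddress
{
    public Guid UserId { get; set; }
    public Address NewAddress { get; set; }
    // Business consequences: maybe notify the local agency,
    // recompute shipping costs, send a welcome letter…
}

public class CorrectTheMisspellingInUserAddress
{
    public Guid UserId { get; set; }
    public Address CorrectedAddress { get; set; }
    // Pure data fix: no business process should be triggered.
}
```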

State change

What’s the new state ?

It’s the easy part once the rules are applied.

Domain Model

What aren’t they for ?

Validation : commands are validated before the model is called. Do not bloat your domain model with this.

Queries: entity relationships for reading are unnecessary. You can safely do eager loading on your Aggregate Roots; they’ll never be used for queries that need only partial information.

What are they for ?

Answer the question: should we do what this valid command is asking?

If the answer is yes, change the state !

Maintain the query model up to date

There are two main ways to maintain the query model up to date.

You can use something like views or ETL to transform data from the domain data to the shape required by the query side.

If you prefer, or when your domain persistence is not compatible with this option (OODB, Event Storage…), you can publish events from your command side, and provide handlers on the query side that will maintain the views’ state in the relational database (or a cube… or whatever). Denormalization will happen here.

What do we gain from this ?

Asynchronous model

The model is deeply asynchronous, it’s not a matter of tweaking things with threads. It’s asynchronous from the ground up, at domain level.

Your user sends a command, and your design is good if you can answer: “thank you, we will come back to you soon…”. Take the time needed to fulfill your user’s wish, and he will be happy!

Scalability

By relaxing the rules, the system becomes more scalable.

Domain persistence choice

The domain is accessed only to process rules and state changes. There is no need to join tables or filter rows. So you can easily use a non-relational database.

Possible options are an OODB or an Event Storage (for event sourcing).

You can still use a RDBMS with or without an ORM if you’re more familiar with these technologies.

But the persistence mechanism becomes an implementation detail of the Command side that will not interfere with your queries.

Conclusion

Ooops, I said it was not a complete report… but it actually is. Every point was interesting!

After the talk we had a discussion about forecasting and other interesting subjects. Perhaps more on this later.

There was a video camera in the room, so I think the guys from Zenika will try to put it on the internet when they have time. I’ll add the link when available.

If you were there and have a picture of the event, I would be glad to put it on the blog :D

Monday, November 16, 2009

Udi Dahan talks on CQRS at Zenika

I’ll be at Udi Dahan’s talk this evening (19h) at Zenika in Paris.

Tell me if you’re planning to be there too !

I’ll surely post about it in the following days.

Thursday, November 5, 2009

Event Sourcing and CQRS, Serialization

Be sure to read the three preceding parts of the series:

Event Sourcing and CQRS, Now!
Event Sourcing and CQRS, Let’s use it
Event Sourcing and CQRS, Dispatch options

Today, we’ll study a required part of the event storage: Serialization/Deserialization.

The easy way

The .Net framework has several serialization technologies that can be used here: binary serialization, XML serialization, or even the DataContract serialization introduced with WCF.

The penalty

The particularity of Event Sourcing is that we will never delete or update stored events. They’ll be logged, insert only, once and forever.

So the log grows. Grows. Grows.

Serialized event size will greatly influence the growth rate of the log.

Xml Serialization

If your system frequently processes lots of events, forget about XML. Far too verbose; you’ll pay the Angle Bracket Tax.

Binary Serialization

Binary serialization still costs a lot: even if more compact, it will contain type names and field names…

Raw Serialization

You could write serialization/deserialization code in your types.

The type can choose a format, so no extra type/field name is needed. This kind of serialization is very compact – it contains only the required bits – but you cannot read the data back without the deserialization code.

It can be ok if you plan to have a small, fixed number of well documented events. Unmanageable if your event type count grows with time and versions.

Avoid it

Let’s consider how data is stored in a database.

A database contains tables. Tables have a schema. When storing a row, there’s no need to repeat column names in each cell. The data layout is defined by the table schema and will be the same for each row.

We cannot do exactly the same since events have different schemas, but we work with a limited set of event types that will each occur many times.

Split schema and data

We can thus store schemas aside, and specify the row’s data schema on each row. The event data will then be stored as raw bits corresponding to the specified schema.

This way you can design tools to explore your log file with a complete event representation without needing the original event class, and you get a very compact serialization. Have your cake and eat it too!
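A rough sketch of that layout, ahead of the actual code: the record format here is purely illustrative, showing only the idea of a schema id replacing repeated field names.

```csharp
// Schemas are stored once, aside from the log; each row starts
// with a schema id instead of repeating type and field names:
//
//   Schemas: 1 -> BookRegistered { Id: Guid, Title: string, Isbn: string }
//            2 -> BookLent       { Id: Guid, By: Guid, On: DateTime }
//
// Writing one event row: [schemaId][length][raw field bits only].
void WriteEvent(BinaryWriter writer, int schemaId, byte[] rawFieldBits)
{
    writer.Write(schemaId);          // which schema describes the bits
    writer.Write(rawFieldBits.Length);
    writer.Write(rawFieldBits);      // field values only, no names
}
// A log explorer reads schemaId, looks up the schema, and can render
// every field by name without the original event class.
```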

Stay tuned, the code comes tomorrow…

Tuesday, November 3, 2009

Event Sourcing and CQRS, Dispatch options.

As seen in previous post, I used dynamic to replay events.

The main reason to use it was to avoid long code using reflection in the infrastructure that would have made it hard to read.

I’ll show several ways to do this dispatch with pros and cons in each cases.

Dynamic

The proposed solution was using dynamic.

+ Pros: there is no reflection code involved, code is very simple.
- Cons: all state change (Apply) methods must have the same name.

I made no performance tests, so I cannot judge whether perf is better or not. It seems the DLR has a rather good cache when the same type is encountered several times, but only measurements can tell.
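For reference, the dynamic dispatch from the previous post boils down to something like this (a sketch; the single Apply name follows the convention noted in the cons above):

```csharp
public abstract class AggregateRoot
{
    protected void Replay(IEnumerable<object> events)
    {
        foreach (var @event in events)
            // The DLR picks the right Apply overload at runtime
            // based on the concrete event type.
            ((dynamic)this).Apply((dynamic)@event);
    }
}
```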

Handlers registration

This is the current implementation in Mark Nijhof’s sample.

The base class maintains a dictionary of Type/Action<T> association to dispatch events based on type.

Since an Action<T> delegate must have a target instance, the delegate must be constructed from within the instance, in the .ctor.

    public class AggregateRoot<TId>
    {
        readonly Dictionary<Type, Action<object>> handlers =
            new Dictionary<Type, Action<object>>();

        protected void Register<T>(Action<T> handler)
        {
            handlers.Add(typeof(T), e => handler((T)e));
        }

        protected void Replay(IEnumerable<object> events)
        {
            foreach (var @event in events)
                handlers[@event.GetType()](@event);
        }

        // rest of the aggregate root class
    }

Here is code that uses it:

 

    public class Book : AggregateRoot<BookId>
    {
        private readonly BookId id;

        public Book(BookId id, IEnumerable<object> events) : this(id)
        {
            Replay(events);
        }

        public Book(BookId id, string title, string isbn) : this(id)
        {
            var @event = new BookRegistered(id, title, isbn);
            OnBookRegistered(@event);
            Append(@event);
        }

        private Book(BookId id)
        {
            this.id = id;
            Register<BookRegistered>(OnBookRegistered);
            Register<BookLent>(OnBookLent);
            Register<BookReturned>(OnBookReturned);
        }

        private void OnBookRegistered(BookRegistered @event) { /**/ }
        private void OnBookLent(BookLent @event) { /**/ }
        private void OnBookReturned(BookReturned @event) { /**/ }
    }

+ Pros: Still no reflection,
        Meaningful method names
- Cons: Additional plumbing code,
        Private constructor to avoid repetition,
        Registration occurs at each instantiation

Convention Based Method Naming

This is the way advocated by Greg Young.

If your event is called BookRegistered, assume the method will be called OnBookRegistered, and find it by reflection. You can implement a cache at the class level to avoid reflection on each dispatch.

 

    public abstract class AggregateRoot<TId> : IAggregateRoot<TId>
    {
        private static readonly Dictionary<Type, IEventDispatcher> Handlers =
            new Dictionary<Type, IEventDispatcher>();
        private static readonly object HandlersLock = new object();

        protected void Replay(IEnumerable<object> events)
        {
            var dispatcher = GetDispatcher();
            dispatcher.Dispatch(this, events);
        }

        private IEventDispatcher GetDispatcher()
        {
            IEventDispatcher handlers;
            var type = GetType();
            lock (HandlersLock)
            {
                if (!Handlers.TryGetValue(type, out handlers))
                {
                    handlers = EventDispatcher.Create(type);
                    Handlers.Add(type, handlers);
                }
            }
            return handlers;
        }

        // rest of the code here
    }

The dispatcher code :

    internal interface IEventDispatcher
    {
        void Dispatch(object target, IEnumerable<object> events);
    }

    internal class EventDispatcher<T> : IEventDispatcher
    {
        private readonly Dictionary<Type, IEventHandler<T>> handlers;

        public EventDispatcher()
        {
            var h = from m in typeof(T)
                        .GetMethods(BindingFlags.Instance | BindingFlags.NonPublic)
                    let parameters = m.GetParameters()
                    where parameters.Length == 1
                        && m.Name == "On" + parameters[0].ParameterType.Name
                    select EventHandler.Create<T>(m);

            handlers = h.ToDictionary(i => i.EventType);
        }

        public void Dispatch(object target, IEnumerable<object> events)
        {
            var typedTarget = (T)target;
            foreach (var @event in events)
            {
                var handler = handlers[@event.GetType()];
                handler.Call(typedTarget, @event);
            }
        }
    }

    internal static class EventDispatcher
    {
        public static IEventDispatcher Create(Type type)
        {
            return (IEventDispatcher)Activator.CreateInstance(
                typeof(EventDispatcher<>).MakeGenericType(type));
        }
    }

and the event handler :

    internal interface IEventHandler<T>
    {
        void Call(T target, object argument);
        Type EventType { get; }
    }

    internal class EventHandler<TEntity, TEvent> : IEventHandler<TEntity>
    {
        private readonly Action<TEntity, TEvent> handler;

        public EventHandler(MethodInfo methodInfo)
        {
            handler = (Action<TEntity, TEvent>)Delegate.CreateDelegate(
                typeof(Action<TEntity, TEvent>), methodInfo, true);
        }

        public void Call(TEntity target, object argument)
        {
            handler(target, (TEvent)argument);
        }

        public Type EventType
        {
            get { return typeof(TEvent); }
        }
    }

    internal static class EventHandler
    {
        public static IEventHandler<T> Create<T>(MethodInfo methodInfo)
        {
            var eventType = methodInfo.GetParameters()[0].ParameterType;

            return (IEventHandler<T>)Activator.CreateInstance(
                typeof(EventHandler<,>).MakeGenericType(typeof(T), eventType),
                methodInfo);
        }
    }

The trick here is to create a static delegate with two parameters from an instance method info that takes one parameter (and one implicit this target).

This way, the delegate is not tied to a specific instance and can be used on any target.
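The same mechanism in a standalone form (string.ToUpperInvariant stands in for an Apply method here):

```csharp
using System;
using System.Reflection;

static class OpenDelegateSample
{
    public static Func<string, string> CreateToUpper()
    {
        // Instance method with no parameters...
        MethodInfo toUpper = typeof(string).GetMethod("ToUpperInvariant", Type.EmptyTypes);

        // ...bound as an "open instance" delegate: the target instance
        // becomes the delegate's first parameter.
        return (Func<string, string>)Delegate.CreateDelegate(
            typeof(Func<string, string>), toUpper);
    }

    static void Main()
    {
        var toUpper = CreateToUpper();
        // The same delegate instance works on any target.
        Console.WriteLine(toUpper("book"));  // BOOK
        Console.WriteLine(toUpper("event")); // EVENT
    }
}
```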

As you can see, this option requires more code ! I did not want to start with that.

+Pros : Convention-based names mean no manual mapping, the mapping is implicit
            Binding is done at class level instead of instance level

-Cons : Only unit tests can tell when you mess up a name
            Not immune to event renaming, you should have good unit tests !

Apply then Append

I also had a remark that if I forget Append after Apply, I’ll get in trouble.

In the Handler Registration and Convention-based method naming options, the dispatch can be done by the base class, so I could tell the base class to dispatch the event and then Append it to UncommittedEvents.

This way you end with something like :

            var @event = new BookLent(/**/);

            Play(@event);

where Play dispatches the event to the right method and appends it.

This way you cannot forget.

My problem with this, especially in the Convention-based method naming scenario, is that nobody references the event application methods anymore. Resharper will report them as unused methods, and you won't know unless you run the unit tests.

Moreover, you pay the cost of a dynamic dispatch when you know your event type.

Perhaps something like this could be better :

            var @event = new BookLent(/**/);

            Play(@event).With(OnBookLent);

the implementation is not very complicated :

    public class AggregateRoot<TId>

    {

        private readonly UncommittedEvents uncommittedEvents;

 

        protected EventPlayer<TEvent> Play<TEvent>(TEvent @event)

        {

            return new EventPlayer<TEvent>(@event, uncommittedEvents);

        }

        ... rest of the code here

    }

 

    public struct EventPlayer<TEvent>

    {

        private readonly TEvent @event;

        private readonly UncommittedEvents uncommittedEvents;

        internal EventPlayer(TEvent @event, UncommittedEvents uncommittedEvents)

        {

            this.@event = @event;

            this.uncommittedEvents = uncommittedEvents;

        }

 

        public void With(Action<TEvent> handler)

        {

            handler(@event);

            uncommittedEvents.Append(@event);

        }

    }

This way, the methods are referenced at least once, with type checking.

My mind is still not set… What do you prefer ?

Monday, November 2, 2009

Event Sourcing and CQRS, Let's use it.

Last time, we started a very basic Event Sourcing/Domain Events/CQRS framework. Be careful : I made an edit in the nested DomainEvents+Handler<T>.Handles<E>() method, and the AggregateRoot.Replay method will not work as is, but we won't need it.

We’ll build an equally simplistic application for personal library management.

The Ubiquitous Language will be minimal.

A Book can be Registered with a Title and an ISBN.

A Book can be Lent to a Borrower at some Date for an Expected Time Span.

A Book can then be Returned. If it is Returned after Expected Time Span, the return is Late.

That’s enough for our first try.

The Command Context

The State Change Events

Here is the code for the three events that we found in the Ubiquitous language:

    public class BookRegistered

    {

        public readonly BookId Id;

        public readonly string Title;

        public readonly string Isbn;

 

        public BookRegistered(BookId id, string title, string isbn)

        {

            Id = id;

            Title = title;

            Isbn = isbn;

        }

    }

 

    public class BookLent

    {

        public readonly BookId Id;

        public readonly string Borrower;

        public readonly DateTime Date;

        public readonly TimeSpan ExpectedDuration;

 

        public BookLent(BookId id, string borrower, DateTime date,

               TimeSpan expectedDuration)

        {

            Id = id;

            Borrower = borrower;

            Date = date;

            ExpectedDuration = expectedDuration;

        }

    }

 

    public class BookReturned

    {

        public readonly BookId Id;

        public readonly string By;

        public readonly TimeSpan After;

        public readonly bool Late;

 

        public BookReturned(BookId id, string @by, TimeSpan after,

             bool late)

        {

            Id = id;

            By = by;

            After = after;

            Late = late;

        }

    }

These events will usually be serialized to the event storage and on a service bus, but here everything runs in memory.

The Book Aggregate Root

The book will need to be referenced by an identity in our system. We’ll hide a Guid behind a BookId struct :

    public struct BookId : IEquatable<BookId>

    {

        private Guid id;

 

        private BookId(Guid id) { this.id = id; }

 

        public static BookId NewBookId() { return new BookId(Guid.NewGuid()); }

 

        public bool Equals(BookId other) { return other.id.Equals(id); }

 

        public override bool Equals(object obj)

        {

            if (ReferenceEquals(null, obj)) return false;

            if (obj.GetType() != typeof(BookId)) return false;

            return Equals((BookId)obj);

        }

 

        public override int GetHashCode() { return id.GetHashCode(); }

    }

Now, the Book class itself :

  public class Book : AggregateRoot<BookId>

  {

      private readonly BookId id;

      private string title;

      private string isbn;

      private string borrower;

      private DateTime date;

      private TimeSpan expectedDuration;

 

      public Book(BookId id, IEnumerable<object> events)

      {

          this.id = id;

          foreach (dynamic @event in events)

              Apply(@event);

      }

 

      public Book(BookId id, string title, string isbn)

      {

          this.id = id;

          var @event = new BookRegistered(id, title, isbn);

          Apply(@event);

          Append(@event);

      }

 

      public override BookId Id { get { return id; } }

 

      public void Lend(string borrower, DateTime date,

                              TimeSpan expectedDuration)

      {

          if (this.borrower != null)

              throw new InvalidOperationException("The book is already lent.");

          var @event =

                new BookLent(id, borrower, date, expectedDuration);

          Apply(@event);

          Append(@event);

      }

 

      public void Return(DateTime returnDate)

      {

          if (borrower == null)

              throw new InvalidOperationException("The book has not been lent.");

          if (returnDate < date)

              throw new ArgumentException(

                "The book cannot be returned before being lent.");

          var actualDuration = returnDate - date;

          var @event = new BookReturned(

                         id,

                         borrower,

                         actualDuration,

                         actualDuration > expectedDuration);

          Apply(@event);

          Append(@event);

      }

 

      private void Apply(BookRegistered @event)

      {

          title = @event.Title;

          isbn = @event.Isbn;

      }

 

      private void Apply(BookLent @event)

      {

          borrower = @event.Borrower;

          date = @event.Date;

          expectedDuration = @event.ExpectedDuration;

      }

 

      private void Apply(BookReturned @event)

      {

          borrower = null;

      }

  }

The class implements AggregateRoot<BookId> and so provides an explicitly implemented UncommittedEvents property.

The first .ctor is used to load the Aggregate Root, the second one is used to build a new Aggregate Root.

The public methods (Lend and Return) are the commands on the Aggregate Root as defined in the Ubiquitous Language.

The structure is always the same :

  1. Validate arguments and state
  2. Prepare state transition using domain logic
  3. Apply state transition (no domain logic should happen here)
  4. Append state transition to uncommitted events

The first .ctor uses dynamic to dispatch each event object to the corresponding Apply overload. If you implement the pattern in a previous C# version, it is advisable to provide a Replay method in the base class that performs the dispatch through reflection.

That’s all for the entity. No ORM, no mapping… easy.

The Repository

It is often clearer to provide a specific repository interface that exposes only available methods. With event sourcing, it’s not that useful… we’ll write it anyway in case you’d like to use dependency injection. The interface is part of the domain and should be in the same assembly as the entity and the events.

    public interface IBookRepository

    {

        void Add(Book book);

        Book this[BookId id] { get; }

    }

The implementation will simply derive from the Repository base class, it can be in the application assembly.

    internal class BookRepository :

        Repository<BookId, Book>,

        IBookRepository

    {

        protected override Book CreateInstance(BookId id,

            IEnumerable<object> events)

        {

            return new Book(id, events);

        }

    }

Add and the indexer are implemented by the base class. The only thing to provide is a way to instantiate the class with expected parameters.

We could use Activator.CreateInstance or reflection to provide a generic implementation. I choose to make it simpler to read.
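For the record, a generic implementation could look like this (a sketch, not from the original post; it assumes every aggregate exposes a (TId, IEnumerable&lt;object&gt;) constructor, as Book does, and only fails at runtime if that convention is broken):

```csharp
// Hypothetical generic repository: one class serves all aggregates,
// at the price of reflection and a constructor convention.
internal class ReflectionRepository<TId, TAggregateRoot> :
    Repository<TId, TAggregateRoot>
    where TAggregateRoot : AggregateRoot<TId>
{
    protected override TAggregateRoot CreateInstance(
        TId id, IEnumerable<object> events)
    {
        // Relies on a public (TId, IEnumerable<object>) .ctor being present.
        return (TAggregateRoot)Activator.CreateInstance(
            typeof(TAggregateRoot),
            id,
            events);
    }
}
```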

The Query context

The Report Database

We’ll mimic a reporting table of book lent state :

This would be the data returned from table rows :

    public class BookState

    {

        public BookId Id { get; set; }

        public string Title { get; set; }

        public bool Lent { get; set; }

    }

And this will hide the data table implementation :

    public interface IBookStateQuery

    {

        IEnumerable<BookState> GetBookStates();

        BookState GetBookState(BookId id);

        IEnumerable<BookState> GetLentBooks();

 

        void AddBookState(BookId id, string title);

        void SetLent(BookId id, bool lent);

    }

We can simply query data to report in the UI, and update data state.

Implementation will be in memory for now :

    class BookStateQuery : IBookStateQuery

    {

        private readonly Dictionary<BookId, BookState> states =

                     new Dictionary<BookId, BookState>();

 

        public IEnumerable<BookState> GetBookStates()

        {

            return states.Values;

        }

 

        public BookState GetBookState(BookId id)

        {

            return states[id];

        }

 

        public IEnumerable<BookState> GetLentBooks()

        {

            return states.Values.Where(b => b.Lent);

        }

 

        public void AddBookState(BookId id, string title)

        {

            var state = new BookState { Id = id, Title = title };

            states.Add(id, state);

        }

 

        public void SetLent(BookId id, bool lent)

        {

            states[id].Lent = lent;

        }

    }

The important point here is that no domain logic occurs.

A RDBMS implementation could use an ORM or simply build DTOs from a DataReader.
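As an illustration, a SqlClient-based sketch of one method (the table and column names are assumptions, and the remaining IBookStateQuery members are elided):

```csharp
// Hypothetical RDBMS-backed read side: denormalized rows are read
// straight into DTOs with a DataReader, no ORM involved.
class SqlBookStateQuery /* : IBookStateQuery, other members elided */
{
    private readonly string connectionString;

    public SqlBookStateQuery(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public IEnumerable<BookState> GetLentBooks()
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, Title, Lent FROM BookState WHERE Lent = 1",
            connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
                while (reader.Read())
                    yield return new BookState
                    {
                        // Rebuilding the BookId would need a ctor from Guid,
                        // which the sample's BookId does not expose yet.
                        Title = reader.GetString(1),
                        Lent = reader.GetBoolean(2)
                    };
        }
    }
}
```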

The event handlers

We can now denormalize domain states to the reporting database using an event handler :

    class BookStateHandler :

    Handles<BookRegistered>,

    Handles<BookLent>,

    Handles<BookReturned>

    {

        private readonly IBookStateQuery stateQuery;

 

        public BookStateHandler(IBookStateQuery stateQuery)

        {

            this.stateQuery = stateQuery;

        }

 

        public void Handle(BookRegistered @event)

        {

            stateQuery.AddBookState(@event.Id, @event.Title);

        }

 

 

        public void Handle(BookLent @event)

        {

            Console.WriteLine("Book lent to {0}", @event.Borrower);

            stateQuery.SetLent(@event.Id, true);

        }

 

        public void Handle(BookReturned @event)

        {

            Console.WriteLine("Book returned by {0}", @event.By);

            stateQuery.SetLent(@event.Id, false);

        }

    }

The Console.WriteLine calls are here to show when things happen; you would usually not keep them in production code. Logging this would not provide much benefit since all the events are already stored in the EventStorage.

Using this handler, the IBookStateQuery will be up to date with current Command Context state. In an asynchronous environment, this is where eventual consistency is introduced.

We will also add a service that will notify when a user returned a book too late :

    class LateReturnNotifier :

    Handles<BookReturned>

    {

        public void Handle(BookReturned @event)

        {

            if (@event.Late)

            {

                Console.WriteLine("{0} was late", @event.By);

            }

        }

    }

Here again, no domain logic, we just do the infrastructure stuff, usually sending an email or a SMS.

View it in Action

    class Program

    {

        static void Main(string[] args)

        {

            ISessionFactory factory = new SessionFactory(new EventStorage());

            IBookStateQuery query = new BookStateQuery();

 

            DomainEvents.RegisterHanlder(() => new BookStateHandler(query));

            DomainEvents.RegisterHanlder(() => new LateReturnNotifier());

 

            var bookId = BookId.NewBookId();

            using (var session = factory.OpenSession())

            {

                var books = new BookRepository();

                books.Add(new Book(bookId,

                   "The Lord of the Rings",

                   "0-618-15396-9"));

                session.SubmitChanges();

            }

 

            ShowBooks(query);

 

            using (var session = factory.OpenSession())

            {

                var books = new BookRepository();

                var book = books[bookId];

                book.Lend("Alice",

                     new DateTime(2009, 11, 2),

                     TimeSpan.FromDays(14));

 

                session.SubmitChanges();

            }

 

            ShowBooks(query);

 

 

            using (var session = factory.OpenSession())

            {

                var books = new BookRepository();

                var book = books[bookId];

                book.Return(new DateTime(2009, 11, 8));

 

                session.SubmitChanges();

            }

 

            ShowBooks(query);

 

 

            using (var session = factory.OpenSession())

            {

                var books = new BookRepository();

                var book = books[bookId];

                book.Lend("Bob",

                      new DateTime(2009, 11, 9),

                      TimeSpan.FromDays(14));

 

                session.SubmitChanges();

            }

 

            ShowBooks(query);

 

 

            using (var session = factory.OpenSession())

            {

                var books = new BookRepository();

                var book = books[bookId];

                book.Return(new DateTime(2010, 03, 1));

                session.SubmitChanges();

            }

 

            ShowBooks(query);

        }

 

        private static void ShowBooks(IBookStateQuery query)

        {

            foreach (var state in query.GetBookStates())

                Console.WriteLine("{0} is {1}.",

                       state.Title,

                       state.Lent ? "lent" : "home");

        }

    }

We start by instantiating storage for the command context (the ISessionFactory) and the query context (the IBookStateQuery). In production you’ll use persistent storages (a persistent event storage and a RDBMS). I highly recommend using a Dependency Injection Container for real size projects.

Then we wire the handlers on domain events.

The application can start.

  • We register a book in the library.
  • We lend it to Alice on 2009-11-02 for 14 days
  • She returns it on 2009-11-08, she’s on time
  • We lend it to Bob on 2009-11-09 for 14 days,
  • He returns it on 2010-03-01, he’s late

The output is the following :

The Lord of the Rings is home.    // written from state

Book lent to Alice                // written by the book state handler

The Lord of the Rings is lent.    // written from state

Book returned by Alice            // written by the book state handler

The Lord of the Rings is home.    // written from state

Book lent to Bob                  // written by the book state handler

The Lord of the Rings is lent.    // written from state

Book returned by Bob              // written by the book state handler

Bob was late                      // written by the late return notifier

The Lord of the Rings is home.    // written from state

We have here a clear separation between Command that handles the domain logic and Query that handles presentation logic.

Have fun. Questions and remarks expected !

Friday, October 30, 2009

Event Sourcing and CQRS, Now !

Enough talking, Action !

Today, we will build a basic event sourcing infrastructure. Get the beta 2 of Visual Studio 2010, we’ll be using C# dynamic features to go straight to our goal.

The Event Storage

Let’s hide the ugly details of the event storage behind two simple interfaces :

    public interface IEventStorage : IDisposable

    {

        IAggregateRootStorage<TId> GetAggregateRootStorage<TAggregateRoot, TId>()

            where TAggregateRoot : AggregateRoot<TId>;

    }

 

    public interface IAggregateRootStorage<in TId>

    {

        void Append(TId id, IEnumerable<object> events);

        IEnumerable<object> this[TId id] { get; }

    }

And we start with a minimal in memory implementation, the event storage first :

    public class EventStorage : IEventStorage

    {

        private readonly Dictionary<Type, dynamic> stores = new Dictionary<Type, dynamic>();

 

        public IAggregateRootStorage<TId> GetAggregateRootStorage<TAggregateRoot, TId>()

           where TAggregateRoot : AggregateRoot<TId>

        {

 

            dynamic store;

            if (!stores.TryGetValue(typeof(TAggregateRoot), out store))

            {

                store = new AggregateRootStorage<TId>();

                stores.Add(typeof (TAggregateRoot), store);

            }

            return store;

        }

 

        public void Dispose()

        {

            stores.Clear();

        }

    }

Here I could replace dynamic with object, and cast to the requested type on return. I use dynamic because this kind of code is not compile-time safe anyway. There's a specific storage for each Aggregate Root type, depending notably on the identifier type, for type safety.

Then the AggregateRootStorage :

    class AggregateRootStorage<TId> : IAggregateRootStorage<TId>

    {

        private readonly Dictionary<TId, List<object>> store = new Dictionary<TId, List<object>>();

 

        public void Append(TId id, IEnumerable<object> events)

        {

            List<object> aggregateRootEvents;

            if (!store.TryGetValue(id, out aggregateRootEvents))

            {

                aggregateRootEvents = new List<object>();

                store.Add(id, aggregateRootEvents);

            }

 

            aggregateRootEvents.AddRange(events);

        }

 

        public IEnumerable<object> this[TId id]

        {

            get { return store[id]; }

        }

    }

It simply stores the list of events associated with each aggregate root identifier.

The Aggregate Root

Aggregate roots manage uncommitted events :

    public interface IUncommittedEvents : IEnumerable<object>

    {

        bool HasEvents { get; }

        void Commit();

    }

The interface indicates whether there are events, returns them, and clears the uncommitted events when committing.

Quick implementation :

    internal class UncommittedEvents : IUncommittedEvents

    {

        private readonly List<object> events = new List<object>();

 

        public void Append(object @event)

        {

            events.Add(@event);

        }

 

        IEnumerator<object> IEnumerable<object>.GetEnumerator()

        {

            return events.GetEnumerator();

        }

 

        public bool HasEvents

        {

            get { return events.Count != 0; }

        }

 

        void IUncommittedEvents.Commit()

        {

            events.Clear();

        }

 

        IEnumerator IEnumerable.GetEnumerator()

        {

            return events.GetEnumerator();

        }

    }

Nothing tricky here either.

Now, the IAggregateRoot interface used by the repository gives access to the uncommitted events:

    public interface IAggregateRoot<out TId>

    {

        TId Id { get; }

        IUncommittedEvents UncommittedEvents { get; }

    }

The AggregateRoot class will maintain the uncommitted events :

    public abstract class AggregateRoot<TId> : IAggregateRoot<TId>

    {

        private readonly UncommittedEvents uncommittedEvents = new UncommittedEvents();

 

        protected void Replay(IEnumerable<object> events)

        {

            dynamic me = this;

            foreach (var @event in events)

                me.Apply(@event);

        }

 

        protected void Append(object @event)

        {

            uncommittedEvents.Append(@event);

        }

 

        public abstract TId Id { get; }

 

        IUncommittedEvents IAggregateRoot<TId>.UncommittedEvents

        {

            get { return uncommittedEvents; }

        }

    }

The Append method will be used by child classes to append events after they are applied.

The Replay method is used in the child class constructor to rebuild the Aggregate Root state from events.

Here I use a dynamic me variable to dispatch events to the specific Apply methods of the child class. A .NET 2 or 3.5 implementation would use reflection to dispatch events to the Apply methods.
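For those earlier C# versions, a hedged sketch of such a Replay (it assumes the Apply overloads are private instance methods, as in the Book sample; a real implementation would cache the MethodInfo lookup per event type):

```csharp
// Reflection-based alternative to the dynamic dispatch:
// locate the Apply overload whose parameter matches the event's runtime type.
protected void Replay(IEnumerable<object> events)
{
    foreach (var @event in events)
    {
        MethodInfo apply = GetType().GetMethod(
            "Apply",
            BindingFlags.Instance | BindingFlags.NonPublic,
            null,                        // default binder
            new[] { @event.GetType() },  // match on the event type
            null);

        if (apply != null)
            apply.Invoke(this, new[] { @event });
    }
}
```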

The UncommittedEvents property is implemented explicitly so that it does not appear in standard class use.

The Repository

The repository is just very slightly longer. I added a session concept so that several repositories can submit changes in a single transaction :

    internal interface ISessionItem

    {

        void SubmitChanges();

    }

 

     public abstract class Repository<TId, TAggregateRoot> : ISessionItem
        where TAggregateRoot : AggregateRoot<TId>
    {
        private readonly Dictionary<TId, TAggregateRoot> aggregateRoots = new Dictionary<TId, TAggregateRoot>();
        private readonly IAggregateRootStorage<TId> aggregateRootStorage;

        protected Repository()
        {
            aggregateRootStorage = Session.Enlist(this);
        }

        public void Add(TAggregateRoot aggregateRoot)
        {
            aggregateRoots.Add(aggregateRoot.Id, aggregateRoot);
        }

        public TAggregateRoot this[TId id]
        {
            get { return Find(id) ?? Load(id); }
        }

        private TAggregateRoot Find(TId id)
        {
            TAggregateRoot aggregateRoot;
            return aggregateRoots.TryGetValue(id, out aggregateRoot) ? aggregateRoot : null;
        }

        private TAggregateRoot Load(TId id)
        {
            var events = aggregateRootStorage[id];
            var aggregateRoot = CreateInstance(id, events);

            aggregateRoots.Add(id, aggregateRoot);

            return aggregateRoot;
        }

        protected abstract TAggregateRoot CreateInstance(TId id, IEnumerable<object> events);

        public void SubmitChanges()
        {
            foreach (IAggregateRoot<TId> aggregateRoot in aggregateRoots.Values)
            {
                var uncommittedEvents = aggregateRoot.UncommittedEvents;
                if (uncommittedEvents.HasEvents)
                {
                    aggregateRootStorage.Append(aggregateRoot.Id, uncommittedEvents);
                    PublishEvents(uncommittedEvents);
                    uncommittedEvents.Commit();
                }
            }
            aggregateRoots.Clear();
        }

        protected void PublishEvents(IUncommittedEvents uncommittedEvents)
        {
            foreach (dynamic @event in uncommittedEvents)
                DomainEvents.Raise(@event);
        }
    }

The constructor enlists the repository in the current session.

The Add method registers the aggregate root in the repository; its events will be persisted in SubmitChanges().

The indexer finds an entity already in memory or loads it from the event store. The abstract CreateInstance method implementation will be responsible for instantiation.

SubmitChanges does what is expected, and also publishes committed events. We'll see the trick with the dynamic @event when we analyze domain events.

The Session and its Factory

Just to group the SubmitChanges on several repositories :

    public interface ISessionFactory : IDisposable

    {

        ISession OpenSession();

    }

 

    public class SessionFactory : ISessionFactory

    {

        private readonly IEventStorage eventStorage;

 

        public SessionFactory(IEventStorage eventStorage)

        {

            this.eventStorage = eventStorage;

        }

 

        public ISession OpenSession()

        {

            return new Session(eventStorage);

        }

 

        public void Dispose()

        {

            eventStorage.Dispose();

        }

    }

 

    public interface ISession : IDisposable

    {

        void SubmitChanges();

    }

 

    public class Session : ISession

    {

        private readonly IEventStorage eventStorage;

        private readonly HashSet<ISessionItem> enlistedItems = new HashSet<ISessionItem>();

 

        [ThreadStatic] private static Session current;

 

        internal Session(IEventStorage eventStorage)

        {

            this.eventStorage = eventStorage;

            if (current != null)

                throw new InvalidOperationException("Cannot nest unit of work");

 

            current = this;

        }

 

        private static Session Current

        {

            get { return current; }

        }

 

        public void SubmitChanges()

        {

            foreach (var enlisted in enlistedItems)

                enlisted.SubmitChanges();

 

            enlistedItems.Clear();

        }

 

        public void Dispose()

        {

            current = null;

        }

 

        internal static IAggregateRootStorage<TId> Enlist<TId, TAggregateRoot>

                        (Repository<TId, TAggregateRoot> repository)

            where TAggregateRoot : AggregateRoot<TId>

        {

            var unitOfWork = Current;

            unitOfWork.enlistedItems.Add(repository);

            return unitOfWork.eventStorage.GetAggregateRootStorage<TAggregateRoot, TId>();

        }

    }

Ok, almost everything is here. The last part is needed to go CQRS : letting the events be used by other parts of the system.

Domain Events

Here, I made a minor variation on Udi Dahan’s DomainEvents implementation :

    public static class DomainEvents

    {

        [ThreadStatic] private static List<Delegate> actions;

 

        private static List<Handler> handlers;

 

        public static void Register<T>(Action<T> callback)

        {

            if (actions == null) actions = new List<Delegate>();

            actions.Add(callback);

        }

 

        public static void RegisterHanlder<T>(Func<T> factory)

        {

            if (handlers == null) handlers = new List<Handler>();

            handlers.Add(new Handler<T>(factory));

        }

 

        //Raises the given domain event      

        public static void Raise<T>(T @event)

        {

            if (actions != null)

                foreach (Delegate action in actions)

                    if (action is Action<T>)

                        ((Action<T>) action)(@event);

 

            if (handlers != null)

                foreach (var h in handlers)

                {

                    if (h.Handles<T>())

                    {

                        var handler= h.CreateInstance<T>();

                        handler.Handle(@event);

                    }

                }

        }

 

        private abstract class Handler

        {

            public abstract bool Handles<E>();

            public abstract Handles<E> CreateInstance<E>();

        }

 

        private class Handler<T> : Handler

        {

 

            private readonly Func<T> factory;

 

            public Handler(Func<T> factory)

            {

                this.factory = factory;

            }

 

            public override bool Handles<E>()

            {

                return typeof (Handles<E>)

                   .IsAssignableFrom(typeof (T));

            }

 

            public override Handles<E> CreateInstance<E>()

            {

                return (Handles<E>)factory();

            }

        }

    }

 

    public interface Handles<in T>

    {

        void Handle(T @event);

    }

Edit : Changed the Handles<E> : T should be cast as Handles<E>, not as E, of course.

Event handlers can be registered as Action<T> delegates or as classes that implement one or more Handles<T>.

The private Handler and Handler<T> classes are used to find the handlers that handle a specific message and dispatch it to them, without using a Dependency Injection Container as Udi's implementation does.

The simple dynamic-fu in the repository was to call DomainEvents.Raise<T> through a dynamic dispatch. This way, Raise is always called with the actual event type as T; no tricky reflection is needed for the dispatch. Inside Raise<T>, we can then rely on T being the actual event type. Funky !
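A minimal standalone illustration of that difference between static and dynamic dispatch:

```csharp
using System;

static class DynamicDispatchSample
{
    // Stands in for DomainEvents.Raise<T>: reports which T was bound.
    public static string Describe<T>(T @event) { return typeof(T).Name; }

    static void Main()
    {
        object boxed = "some event";

        // Static dispatch: T is inferred from the compile-time type.
        Console.WriteLine(Describe(boxed));   // Object

        // Dynamic dispatch: T is bound to the runtime type.
        dynamic unboxed = boxed;
        Console.WriteLine(Describe(unboxed)); // String
    }
}
```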

Next Time…

There’s already a lot of code for a single post, every thing is in place at infrastructure level. You can already try it for yourself if you can figure how to.

The sample will come in the next post, stay tuned.

Wednesday, October 28, 2009

Uniqueness validation in CQRS Architecture

This is a short follow up on Bjarte’s Post.

There’s an important thing to consider when needing set validation : why ?

Why do these things need to be considered together and cannot just be handled separately ?

We can distinguish two different parameters in uniqueness : Cardinality and Scope.

Cardinality

There are mainly two types of cardinality :

1 Cardinality

Only one employee can be the boss.

The model could provide an IsBoss property on every employee… But consistency would be very hard to achieve, especially in a CQRS architecture.

We should read the preceding rule as :

The company has only one boss. The boss is an employee.

Now, we can model a Boss property on the Company Aggregate Root that will reference the employee that is the boss. Changing the boss can now be an atomic and consistent operation.

We can see that we had to introduce an upper level to manage it (we'll see this in the Scope section).

n Cardinality

Employees should have different user names.

We can clearly see here that user names must be different because they’ll act as identifiers. This is the goal of almost any uniqueness constraint. The property will be used as a key in a lookup.

The 1 (or 2 or 3) cardinality also acts this way. It's a way to tag an entity. You can ask “who is the boss ?” and get the answer by a simple lookup at the Boss property, which acts like a bucket in a hash table.

Scope

There is no such thing as global scope

Even when we say, “Employees should have different user names”, there is an implicit scope : the Company.

Even when we say, “Your Id Card number should be unique”, understand, “at the Country scope”.

Even when we say, “Your DNA should be unique”, understand, “at the scope of life as we understand it”.

Find the scope and see the volume of data whose uniqueness should be enforced.

As we said, properties that have a uniqueness constraint are usually used as lookup values to find those entities. As such they rarely take part in the child entity domain logic.

Instead of having a UserName property on the Employee entity, why not have a UserNames key/value collection on the Company that will give the Employee for a given user name ?

If the Employee count is expected to stay in a limited range, this is the most appropriate solution.

If the number can grow, loading it in memory on each Company hydration is a bit heavy, so keep the directory on disk (using a table with a unique key in a RDBMS as suggested by Bjarte) or any other way that provides the expected performance.
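For the in-memory case, a minimal sketch (names are hypothetical) : the user-name directory lives on the Company, which enforces uniqueness where the key lives.

```csharp
using System;
using System.Collections.Generic;

// User names are keys at the Company scope, not properties of
// the Employee itself.
public class Company
{
    // user name -> employee id
    private readonly Dictionary<string, Guid> userNames =
        new Dictionary<string, Guid>();

    public Guid RegisterEmployee(string userName)
    {
        // The uniqueness constraint is checked at the scope that
        // owns the lookup.
        if (userNames.ContainsKey(userName))
            throw new InvalidOperationException(
                "User name already taken : " + userName);

        var id = Guid.NewGuid();
        userNames.Add(userName, id);
        return id;
    }

    public Guid FindByUserName(string userName)
    {
        return userNames[userName];
    }
}
```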

Conclusion

In every case, when a uniqueness constraint appears on a property, the property does not belong to the entity itself but should be viewed as a key to access the entity from the upper level scope.

Do you have examples that cannot be solved this way ?

Thursday, October 22, 2009

C# Static interfaces - Take 2

As Romain was pointing out in the comments, I totally forgot to tell where I wanted to go with this static interface thing. Need more sleep these days…

So here it is.

No I don’t want to do this

The point was not to enable something like this :

   int value = ICountable.Count;

Static interfaces have no implementation, exactly like interfaces.

With interfaces, you need an instance (usually in a variable or member) to find the actual implementation and call it. With static interfaces, you need a type.

There are two ways to specify a type:

  • with its type name (Sample4.Count)
  • with a generic type parameter (T.Count)

I was also proposing a way to specify a type for extension methods.

Where it would be useful - operators

The reason why everybody is asking for static members in interfaces is ultimately to have operators in interfaces.

Imagine :

    public static interface IArithmetic<T>
    {
        // identity element, used by Power below
        static T Identity { get; }

        static T operator +(T x, T y);
        static T operator -(T x, T y);
        static T operator *(T x, T y);
        static T operator /(T x, T y);
    }

Now you can write generic code like :

        public static T Power<T>(this T value, int count) where T : IArithmetic<T>
        {
            var result = T.Identity;

            for (int i = 0; i < count; i++)
                result = result * value;

            return result;
        }

Cool !

This way, no need for the 20 overloads of Enumerable.Sum, it would work for any type presenting the expected static interface.
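Static interfaces don’t exist, but an era-appropriate workaround is to compile the operator call at runtime with expression trees. A sketch (GenericMath and Multiplier are hypothetical names, not the proposed feature) :

```csharp
using System;
using System.Linq.Expressions;

public static class GenericMath
{
    // Compiles (x, y) => x * y once per T; works for any type
    // (int, double, decimal, ...) that defines operator *.
    private static class Multiplier<T>
    {
        public static readonly Func<T, T, T> Invoke = Compile();

        private static Func<T, T, T> Compile()
        {
            var x = Expression.Parameter(typeof(T), "x");
            var y = Expression.Parameter(typeof(T), "y");
            return Expression.Lambda<Func<T, T, T>>(
                Expression.Multiply(x, y), x, y).Compile();
        }
    }

    // value^count for count >= 1 (no Identity member available,
    // so we start from value instead of 1).
    public static T Power<T>(this T value, int count)
    {
        var result = value;
        for (int i = 1; i < count; i++)
            result = Multiplier<T>.Invoke(result, value);
        return result;
    }
}
```

It avoids the per-type overloads at the cost of one delegate compilation per T, but the constraint is only checked at run time, which is exactly what a static interface would fix.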

Wednesday, October 21, 2009

C# Static interfaces

Read the latest post on the subject:

C# Static Interfaces - Take 3

No DDD today, let’s talk a bit about our favorite language after a short night (I should really tell my neighbors that 3am is not a good time to move their furniture all around the flat).

You can find requests for static methods in interfaces all over the internet.

But there are good reasons not to.

According to Eric Lippert, the main reason not to is the difference in inheritance between static methods and instance methods, due to the absence of shared slots for static methods.

Mixing both static methods and instance methods in interfaces would lead to a real nightmare when you try to understand what really happens.

But why does this question arise so often then ? What’s really needed ?

Static classes as type instances

Let’s take a simple class with both static and instance members :

    class Sample
    {
        // static part
        private static int count;
        public static int Count { get { return count; } }

        // instance part
        private readonly string name;

        public Sample(string name) { this.name = name; }

        public void Method()
        {
            count++;
            Console.WriteLine("Total count {0} incremented by {1}", count, name);
        }
    }

Here, Count is a static property. The static part differs from the instance part in that it exists only once per type.

But we could see the static part as an object whose reference is the type name.

Why would these objects not have interfaces ?

Let’s refactor this a bit :

 

    public class Sample2
    {
        public sealed class Sample2Class
        {
            internal int count;
            public int Count { get { return count; } }
        }

        public static readonly Sample2Class Class = new Sample2Class();

        private readonly string name;

        public Sample2(string name)
        {
            this.name = name;
        }

        public void Method()
        {
            Class.count++;
            Console.WriteLine("Total count {0} incremented by {1}", Class.count, name);
        }
    }

Here, the only static member is Class, which acts as a singleton for the type. Note that I had to change the count modifier to internal. The behavior is not exactly the same, but it’s conceptually equivalent.

We can make something less extreme :

 

    public class Sample3
    {
        private static int count;
        public static int Count { get { return count; } }

        private readonly string name;

        public static readonly Sample3Class Class = new Sample3Class();

        public sealed class Sample3Class
        {
            public int Count { get { return Sample3.Count; } }
        }

        public Sample3(string name) { this.name = name; }

        public void Method()
        {
            count++;
            Console.WriteLine("Total count {0} incremented by {1}", count, name);
        }
    }

 

Here, we only added a proxy of the public static methods and properties on the singleton class.

We could define an interface implemented by Sample3Class, which would provide the missing slot concept that Eric Lippert talks about.

We can also see here that there is no point mixing static and instance methods in interfaces since inheritance rules differ.

Static Interface

Imagination at work. Let’s define static interfaces as we can define static classes :

    public static interface ICountable
    {
        static int Count { get; }
    }

and implement it on our sample :

    public class Sample4 : ICountable
    {
        private static int count;
        public static int Count { get { return count; } }

        private readonly string name;

        public Sample4(string name) { this.name = name; }

        public void Method()
        {
            count++;
            Console.WriteLine("Total count {0} incremented by {1}", count, name);
        }
    }

The C# compiler would be responsible for creating a singleton stub in the class. Since the goal is to provide a function table, this could also be handled at a lower level by the CLI.

Now, we can have interesting language extensions.

Static member access in generics

Let’s see it in action :

        public static bool IsInstanciated<T>() where T : ICountable
        {
            return T.Count != 0;
        }

There is no ambiguity when using this method since values of parameter T are known at compilation time. The C# compiler could replace the static call with an interface call on the proxy. On a CLI version, the JIT compiler could resolve it once and for all and emit a static call. Using a child type whose parent type implements the interface should not be allowed; the compiler could report this.

This could be combined with an extension method syntax to provide extension methods on types :

        public static bool IsInstanciated(static ICountable type)
        {
            return type.Count != 0;
        }

This one is a bit twisted, and I would gracefully admit changes in the syntax, but the point is that it would enable on types/classes the same kind of thing that Linq offers with interfaces : put the minimum set of methods in the interface, then add lots of features around it with extension methods.

Back to reality

Ok, I should sleep at night, but who knows, it’s perhaps useful.

If anyone sees other applications… just drop a comment.

Friday, July 24, 2009

DDD and Code ReUse

I read several discussions against Code ReUse and layered architectures recently.

Different kinds of Code ReUse

You can split your code into different concerns :

  • Object model extensions
  • Technical Infrastructure
  • Application code

The first two are good candidates for Code ReUse.

By Object model extensions I’m talking about things that make writing code less tedious at the language or object model level.

Example of such code are :

  • IEnumerable and Enumerable
  • Collections
  • Reflection helpers
  • Dependency Injection framework

By Technical Infrastructure I mean things that make your code run in its environment :

  • Generic Service Host,
  • ORM, Data Layer
  • Format serializers / deserializers
  • Configuration helpers
  • Communication frameworks (WCF, Service Buses)
  • UI frameworks (MVC, WPF)

The last part is Application code, and here, things are really different.

Application Code ReUse

For a long time I’ve been writing business code in libraries. I then began to notice problems concerning code and data locality.

When you have a single application (process), no problem.

But if two applications need to modify the same entities, the solution would be to use the same library in both applications so that there is no code duplication. It seems like good practice but you quickly stumble on several problems – I’m sure you’ve already experienced them :

  • Synchronization : the same data will be accessed in the same db from two applications; how do you manage conflicts ?
  • Deployment : when you fix bugs or add features, you must redeploy every application that has a dependency on the library. It slows down the release cycle and makes it riskier; changes have more impact.
  • Code locality : when a problem arises, you have to find which application it comes from.

Let’s examine DDD patterns to see how they fit with reuse :

Services

Let’s start easy. Services are stateless; they should deliver simple actions. But to preserve encapsulation, the best is to host services as true services in their own process (web service, windows service, service on a service bus…).

This way, synchronization is managed in process, deployment is a breeze, and no problem with code locality – code executes in one place.

Entities

Entities are retrieved through a Repository Service, hence, they should follow the same rules as Services.

This way, the implementation of a repository that accesses the database is truly an implementation detail. Anyone who wants to talk to an entity sends a command to it; a handler service gets the entity from the repository and passes the command to the entity. The configuration to access the database is local to the process.

Here again, same benefits.

Moreover, entities should always have different meanings in different bounded contexts; they should have different implementations, so there is no real reason for reuse.
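The command flow described above can be sketched with hypothetical types (an in-memory repository stands in for the real database-backed one) :

```csharp
using System;
using System.Collections.Generic;

// command -> handler -> repository -> entity
public class RenameCustomer
{
    public Guid CustomerId { get; set; }
    public string NewName { get; set; }
}

public class Customer
{
    public Guid Id { get; }
    public string Name { get; private set; }

    public Customer(Guid id, string name) { Id = id; Name = name; }

    public void Rename(string newName) { Name = newName; }
}

// In-memory stand-in; a database-backed implementation would be
// an implementation detail of the hosting process.
public class CustomerRepository
{
    private readonly Dictionary<Guid, Customer> store =
        new Dictionary<Guid, Customer>();

    public void Add(Customer customer) { store[customer.Id] = customer; }
    public Customer Get(Guid id) { return store[id]; }
}

public class RenameCustomerHandler
{
    private readonly CustomerRepository repository;

    public RenameCustomerHandler(CustomerRepository repository)
    {
        this.repository = repository;
    }

    public void Handle(RenameCustomer command)
    {
        // Get the entity from the repository, pass the command to it.
        var customer = repository.Get(command.CustomerId);
        customer.Rename(command.NewName);
    }
}
```

Callers only know the command; how the handler finds and persists the entity stays inside the process.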

Value Objects

Value objects are a bit different.

Some objects are very specific to a bounded context and don’t need to be reused outside it.

Some others can be a good way to encapsulate shared concepts. Money is usually a good example, but there can also be concepts more specific to the domain (you will find them as words that appear in the Ubiquitous Language of different Bounded Contexts).

They can be shared among different contexts, but rarely between different domains. There are exceptions for very generic concepts like money, but even money often needs to be tweaked a bit for each domain…
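A minimal Money sketch (a hypothetical shape, not a reusable library) : immutable, compared by value, and strict about mixing currencies, which is typically where each domain ends up tweaking it.

```csharp
using System;

// Minimal Money value object : immutable and compared by value,
// not by reference.
public struct Money : IEquatable<Money>
{
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    public Money Add(Money other)
    {
        // Each domain decides what mixing currencies means here :
        // throw, convert at a rate, etc. This sketch throws.
        if (other.Currency != Currency)
            throw new InvalidOperationException("Currency mismatch");
        return new Money(Amount + other.Amount, Currency);
    }

    public bool Equals(Money other)
    {
        return Amount == other.Amount && Currency == other.Currency;
    }
}
```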

 

Service Bus to the rescue

Once each bounded context is split, you need to organize communication between the parts. Here come the Service Bus and Messages, but now the only shared parts in the application are :

  • Object model extensions (to code faster and cleaner)
  • Technical infrastructure (so that each process is fully equipped, and there’s not much technical fuss in the application code)
  • General use Value Objects (to manipulate domain concepts instead of int and decimal)
  • Messages (to communicate between contexts)

You could also use web services, but it makes the overall structure less fault tolerant, harder to deploy, and more tightly coupled.

Once you’ve decoupled bounded contexts using messages, the rest is just an internal implementation detail; why would you want to reuse it !

Tuesday, June 23, 2009

Distributed Domain Driven Design and Aggregates

Once again, Gojko Adzic comes with an excellent post : Improving performance and scalability with DDD.

Aggregates are often a bit underused in DDD because they’re difficult to grasp. They’re often seen as a solution to a technical problem. Gojko shows here how to understand them at a domain level. This post gives a clear vision of the role of Aggregates in DDD by placing it in the context of distributed environments.

You should read it !

Where are my Entities and my Repositories ?

During Evans’ talk at ParisJug, some attendees were surprised that there was no mention of Entities or Repositories.

Quite a lot of people were introduced to Domain Driven Design through these aspects and see them as the main advance of DDD.

In contrast, Eric Evans explained to me that he had some regrets about placing these patterns so early in the book. Many readers think they know the most important parts at this point and stop reading after chapter 6.

Actually, those patterns are only object model patterns that enable Separation of Concerns and Persistence Ignorance, but they can also be used in contexts that are not Domain Driven at all, like CRUD.

Of course, if you want to implement a domain independent of all infrastructure concerns, good OO practices will be required, but those practices won’t make your application Domain Driven (don’t take this to mean that not following DDD makes your design bad… you apply DDD if you want and if you need to, but achieving persistence ignorance doesn’t mean you practice DDD).

But you have more to learn from Strategic Design, Bounded Contexts, distillation of the Core Domain, etc.
