Tuesday, 20 December 2016

Basic SaaS conversion guide, migrate your application to the Azure Cloud

A lot of organisations are currently converting their applications to SaaS. Unfortunately, not much has been written about conversion and migration (that I actually found useful), and I could have done with an article like this a few months ago.

The objective of this article is to give you a high-level, pragmatic overview of what you will need to think about, and do, when your company decides to convert or migrate your app to the Cloud.
This article will expose the unknown unknowns, force you to rethink your assumptions, ask you some tough, thought-provoking questions and offer some tips, so that you can give your company better estimates and an idea of what is involved.

Before we start, I would like to let you know that I was involved in a B2B SaaS conversion project that migrated to Microsoft Azure. I will be drawing on my personal experience a lot; by doing so, hopefully I can pass on some real-world experience.

This article is broken into two parts:
  • The Business section is for executives and software architects; it introduces some Cloud concepts and is supposed to make you think about the wider company, costs, why you are migrating and what's going to be involved.
  • The Technical section is for software architects and developers; it's supposed to make you think about how you can go about the migration and what technical details you will need to consider.

Business: Costs, Why and Impact.


1. Understand the business objectives

Converting your app to SaaS is not going to be cheap; it can take from a few weeks to a number of months (depending on the product size). So before you start working on this, understand why the business wants to migrate.

A few good reasons to migrate:
  • Removal of existing data center (cost saving) 
  • International expansion (cost saving) 
  • A lot of redundant compute that is used only during peak times (cost saving) 
  • Cloud service provider provides additional services that can complement your product (innovation)

It’s vital that you understand the business objectives and deadlines, otherwise you will not be able to align your technical solution with the business requirements.

2. Understand the terminology


Here are some terms you need to get to know (definitions from Wikipedia):

Multi Tenancy: The term "software multitenancy" refers to a software architecture in which a single instance of software runs on a server and serves multiple tenants. A tenant is a group of users who share a common access with specific privileges to the software instance.

IaaS (Infrastructure as a Service): "Infrastructure as a Service (IaaS) is a form of cloud computing that provides virtualized computing resources over the Internet. IaaS is one of three main categories of cloud computing services, alongside Software as a Service (SaaS) and Platform as a Service (PaaS)."

PaaS (Platform as a Service): "Platform as a service (PaaS) is a category of cloud computing services that provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app."

SaaS (Software as a Service): Software as a service (SaaS; pronounced /sæs/) is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted. It is sometimes referred to as "on-demand software". SaaS is typically accessed by users using a thin client via a web browser.

Here is a picture that explains IaaS, PaaS and SaaS very well:




3. Migration Approaches


Lift and shift: Take your existing services and just migrate them to IaaS (VMs), then redesign your application for the Cloud in the future.
Multi tenancy: Redesign your application so that it supports multi tenancy, but for now ignore the other Cloud benefits. This migration approach might use IaaS in combination with PaaS. For example, you might host your application in a VM, but use Azure SQL.
Cloud native: Redesign your application completely so that it's Cloud native: it has built-in multi tenancy and uses PaaS services. For example, you would use Azure Web Apps, Azure SQL and Azure Storage.

Which approach you take will depend on the skills within your organisation, what products (if any) you already have in the Cloud, and how much time you have before you need to hit the deadline.

I am not going to focus on lift and shift in this article, as it's just taking what you already have and moving it to the Cloud; strictly speaking that is not SaaS, it is "Hosted" on the Cloud.

4. Identify technology that you will need


Chances are your topology will be a combination of:

  • Front-end: ASP.NET MVC, Angular, etc. Requires IIS with the .NET Framework. 
  • Back-end: Java, .NET, Node.JS, etc. Requires a compute node that can host a JVM or WCF and can listen on a port. 
  • Persistence: SQL Server, MySQL, etc. Requires SQL Server 2016, Azure Storage. 
  • Networking: network segregation. Requires sub networks, VNETs, Web.Config IP Security, Access Control Lists.

Understand what you actually need, don’t assume you need the same things. Challenge old assumptions, ask questions like:
  • Do you really need all of that network segregation? 
  • Do you really need to have front-end and back-end hosted on different machines? By hosting more services on a single machine you can increase compute density and save money. Need to scale? Scale the whole thing horizontally. 
  • Do you really need to use a relational data store for everything? What about NoSQL? By storing binary data in Azure Storage you will reduce Azure SQL compute (DTUs), which will save you a lot of money long term. 
  • In this new world, do you still really need to support different on-prem platforms? SQL Server and MySQL? How about Linux and Windows? By not doing this, you can save on testing, hosting costs, etc.

5. Scalability


The greatest thing about the Cloud is scalability. You can scale horizontally effortlessly (if you architect your application correctly). How will your application respond to this? What is currently the bottleneck in your architecture? What will you need to scale? Back-end? Front-end? Both? What about persistence?

Find out where the bottlenecks are now, so that when you redesign your application you can focus on them. Ideally you should be able to scale anything in your architecture horizontally, however time and budget might not allow you to do this, so you should focus on the bottlenecks.

6. Service Level Agreements (SLA) & Disaster Recovery


Services on the Cloud go down often, and it's often fairly random what goes down; please do take a look at the Azure Status History.

The good news is that there is plenty of redundancy in the Cloud and you can just fail over or re-setup your services and you will be up and running again.

Something to think about:
  • What kind of SLA will you need, 99.9%? Is that per month or per year? 
  • What kind of Recovery Point Objectives will you need to reach? 
  • What kind of Recovery Time Objectives will you need to reach? 
  • How much downtime has your preferred Cloud service provider experienced recently? For example, Azure had around 11 hours of downtime this year (2016). 
  • If you store some data in a relational database, some in NoSQL and some on a queue, when your services all go down, how are you going to restore your backups? 
  • How will you handle data mismatches, e.g. Azure SQL being out of sync with Azure Storage?

The higher the SLA, the more likely you will need to move towards an active-active architecture, but it's complex, requires maintenance and lots of testing. Do you really need it? Will active-cold do?
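
To put concrete numbers on the SLA question, here is a quick back-of-the-envelope calculation (a minimal C# sketch; the figures are illustrative, not any provider's actual terms):

    // Rough downtime budget implied by an SLA percentage.
    // 99.9% sounds high until you translate it into minutes.
    double sla = 0.999;

    double minutesPerMonth = 30 * 24 * 60;   // ~43,200 minutes
    double hoursPerYear = 365 * 24;          // 8,760 hours

    Console.WriteLine((1 - sla) * minutesPerMonth); // ~43 minutes of downtime allowed per month
    Console.WriteLine((1 - sla) * hoursPerYear);    // ~8.8 hours of downtime allowed per year

A 99.9% monthly SLA allows roughly 43 minutes of downtime every month; the same percentage measured yearly allows almost 9 hours in one go, which is why the "per month or per year" question matters.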

Please check out this great Microsoft article: Designing resilient applications for Azure.

7. Security & Operations


Security and operations requirements are different on the Cloud. Tooling is not as mature, especially if you are looking to use PaaS services, and the tooling that is available out of the box takes time to get used to, for example Azure App Insights, Security Center, etc.
Find out what security controls will need to be in place, and find out what logs and telemetry the ops team will need to do their job.

8. Runtime cost model


Start putting together a runtime cost model for the different Cloud service providers. At this stage you know very little about these services, but you should start estimating the runtime costs anyway. Cloud service providers normally provide tools to help you with this; for example, Azure provides the Azure Pricing Calculator.

9. Staff Training


Now that you have a rough idea which service provider you will choose, start looking into training courses. Training can be official, e.g. staff can get the Cloud Platform MCSA certification, or it can be more informal. No matter what you do, I encourage you to give your staff time to play around with different services, put together a few PoCs, read a few books e.g. Developing Microsoft Azure Solutions, and watch some Azure videos e.g. Azure Friday.
This training will require some time; if you haven't got time, consider consultancy.

10. Consultancy


If you would like to get some help, there are a number of things you can do with Azure: 
  • Hire Azure migration consultants 
  • Contact the "Microsoft Developer eXperience" team; they might be able to help you with the migration by investing in your project (free consultation & PoCs). 
  • Sign up for Microsoft "Premier Support for Developers"; you will get a dedicated Application Development Manager who will be able to assist and guide you and your team.

I strongly advise you to grow talent internally and not rely solely on consultants. A hybrid approach works best: get some consultancy and get the consultants to coach your internal team.

Technical: How are you going to do it?


1. Multi tenancy data segregation


Hosted applications tend to be installed on a machine and used by one company; SaaS applications, however, are installed once and used by lots of different companies. With sharing comes great responsibility. There are a number of ways that you can handle multi tenancy data segregation: 
  • Shared Database, Shared Schema (Foreign Keys partition approach) 
  • Shared Database, Separate Schema 
  • Separate Database

What approach you go for depends on your requirements. Microsoft has written an article, Multi-Tenant Data Architecture; please read it.

What is my take? If your app is very simple (a to-do list, a basic document store, etc), go for the foreign key partition. If your app is more complicated, e.g. a finance or medical system, I would go for a separate database / separate persistence per tenant. Separate persistence will reduce the chance of cross-tenant data leaks, make tenant deletion simpler and, more importantly, make ops easier: you can upgrade each tenant's persistence individually, you can back tenants up more often if you need to, and most likely you will not need to worry about sharding. However, the separate persistence approach is not simple to implement; it requires a lot of development.
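
To illustrate the shared database / shared schema option: the key discipline is that every query is scoped by a tenant identifier. A minimal sketch (the table, column and class names are made up for illustration):

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public class InvoiceRepository
    {
        private readonly string connectionString;
        private readonly int tenantId; // resolved once per request from the authenticated user

        public InvoiceRepository(string connectionString, int tenantId)
        {
            this.connectionString = connectionString;
            this.tenantId = tenantId;
        }

        // Every query filters on TenantId. Forgetting this filter is how cross-tenant
        // data leaks happen, so centralise the filtering in one place.
        public IList<string> GetInvoiceNumbers()
        {
            var invoiceNumbers = new List<string>();
            using (var connection = new SqlConnection(this.connectionString))
            using (var command = connection.CreateCommand())
            {
                command.CommandText = "SELECT InvoiceNumber FROM dbo.Invoice WHERE TenantId = @tenantId";
                command.Parameters.AddWithValue("@tenantId", this.tenantId);
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        invoiceNumbers.Add(reader.GetString(0));
                    }
                }
            }
            return invoiceNumbers;
        }
    }

With the database-per-tenant approach the repository stays almost identical; the difference is that the connection string itself is resolved per tenant and the TenantId column disappears.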

2. Login Experience


On the Cloud the user experience will need to change: because many businesses are using the application at the same time, you need to somehow know who is who. There are a number of ways for users to access their tenant area: 
  • Dedicated URL 
  • Recognise Possible Tenants On Login

Dedicated URL
You can give your tenants a dedicated URL so that they can log into their app. For example, you can give them access to: yourapp.com/customername or customername.yourapp.com.

However, this approach will require you to send out an email to the tenant, informing them 
what their URL is. If they forget the URL they will end up contacting your support team.
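
If you go for customername.yourapp.com, the tenant can be resolved from the host name on every request. A minimal sketch (a real implementation would also validate the name against your tenant store, which is not shown here):

    using System;

    public static class TenantResolver
    {
        // "contoso.yourapp.com" -> "contoso"; returns null for the naked domain.
        public static string ResolveFromHost(Uri requestUri)
        {
            string[] parts = requestUri.Host.Split('.');

            // Anything shorter than sub.domain.tld is the marketing / generic login site.
            return parts.Length >= 3 ? parts[0].ToLowerInvariant() : null;
        }
    }

In ASP.NET MVC you would call this with Request.Url early in the pipeline and reject requests for unknown tenants.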

Recognise Possible Tenants On Login
A tenant user goes to yourapp.com and logs in. When they log in, they are presented with the possible tenants that they can access.



With this approach tenants don't need to remember their URL; however, you need to introduce an extra step before they log in and you need to scan all possible tenants to see if the logging-in user exists. This is more work. Not to mention, what if your customer wants to use AzureAD? How do you know which AzureAD should be used? Now you will need to introduce mapping tables, extra screens, etc.

3. Application state, is your application already stateless?


If you need your application to scale out, then your compute instances should not keep any state in memory or on disk. Your application needs to be stateless.

However, if you temporarily need to keep some basic state, like session state, then you are in luck: Microsoft Azure allows you to use ARR (Application Request Routing) affinity so that all of a client's requests get forwarded to the same instance every time. This should be a temporary solution, as you will end up overloading a single instance, and it will take a while for the affinity cookies to expire and spread the load to other instances.
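
The longer-term fix is to push that state off the instance, for example into a distributed cache, so that any instance can serve any request and ARR affinity can be switched off. A minimal sketch using StackExchange.Redis against Azure Redis Cache (the connection string and key format are assumptions):

    using System;
    using StackExchange.Redis;

    // Connect once per process and reuse; the connection string should come from App Settings.
    ConnectionMultiplexer redis = ConnectionMultiplexer.Connect(
        "yourcache.redis.cache.windows.net:6380,password=...,ssl=True,abortConnect=False");
    IDatabase cache = redis.GetDatabase();

    // Any instance can read what any other instance wrote, so sticky sessions are no longer required.
    string sessionId = "3f2c9a"; // taken from the auth cookie in a real app
    cache.StringSet("session:" + sessionId, "{ \"basketItems\": 3 }", TimeSpan.FromMinutes(20));
    string sessionState = cache.StringGet("session:" + sessionId);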

4. What cloud services should you use?


This largely comes down to the security and performance requirements. 

Azure Web Apps are great; however, they don't support VNETs, which means you can't stop OSI layer 3 traffic to the front-end and back-end. You can, however, use Web.Config IP Security to restrict access to the front-end and back-end, and Azure SQL supports Access Control Lists (ACLs). Azure Web Apps also don't scale very well vertically, so if you have a lot of heavy compute it might make more economic sense to use Service Fabric or Cloud Services so that you can scale both vertically and horizontally.

However, Azure Web Apps require no maintenance, they are easy to use, they scale almost immediately and they are very cheap.

What is my take? Always go with PaaS services and avoid IaaS as much as you can. Let Azure do all of the operating system and service patching / maintenance. Go to IaaS only if you have no other choice.

5. Telemetry & Logging


If you are going to use pure PaaS services then most likely you will not be able to use your traditional ops tools for telemetry. The good news is that ops tooling on the Cloud is getting better.

Azure has a tool called App Insights; it allows you to: 
  • Store / view your application logs (info, warnings, exceptions, etc). You will need to change your logging appender to make this work (a short example follows this list). 
  • Analyse availability 
  • Analyse app dependencies 
  • Analyse performance 
  • Analyse failures 
  • Analyse CPU / RAM / IO (IaaS only)
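
For the first point above, a minimal sketch of sending custom telemetry with the App Insights SDK (the event and metric names are examples; the instrumentation key normally comes from ApplicationInsights.config or App Settings rather than code, and Microsoft also publishes appenders that route existing log4net / NLog output to App Insights):

    using System;
    using Microsoft.ApplicationInsights;

    var telemetry = new TelemetryClient();

    // Custom events and metrics appear in the portal alongside requests and dependencies.
    telemetry.TrackEvent("TenantProvisioned");
    telemetry.TrackMetric("ReportGenerationSeconds", 4.2);

    try
    {
        // ... some work ...
    }
    catch (Exception ex)
    {
        telemetry.TrackException(ex); // searchable and grouped under Failures
        throw;
    }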

There is also a tool called Azure Security Center. It works very well if you have VMs in your subscription; it will guide you and give you best practices. It also has built-in machine learning, so it analyses your infrastructure and if it sees something unusual it will flag it up and notify your ops team. It's very handy.

6. Always test with many instances


You should be running your application with at least 2 compute instances. The last thing you want is to get to the release date, scale out, and find out that your application does not scale. This means anything that can be scaled horizontally should be scaled out, so that you are always testing with multiple instances.

7. Authentication


Multi tenancy makes authentication hard, and it makes it even harder if your application is B2B. Why? Because all businesses have different authentication requirements: they might have AD that they want to integrate with your app to enable Single Sign-On, or they might be a hip start-up that wants to use Facebook to log into your app. This should not be overlooked; you might need to find an open source authentication server or use a service such as AzureAD.

What is my take? Find an open source authentication server and integrate with it. Avoid AzureAD unless your customers pay for it. AzureAD is very expensive: the Basic tier costs around £0.60 per user and the Premium 2 tier costs around £5.00 per user!

8. Global Config


Hosted applications tend to be installed, so they come with installers. These installers change app keys in the config. For example, if your app integrates with Google Maps, your installer might ask your customer to put in the Google Maps app keys to enable the Google Maps features. In the Cloud world this is very different: customers don't care about this, they just want to log in and use your software. These keys are going to be the same for everyone, which means that most of them will need to be migrated out of the database and config files into App Settings (environment variables).
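
In a .NET app running in an Azure Web App, values set as App Settings in the portal override the matching <appSettings> keys from web.config at runtime, so the code just reads configuration the usual way. A small sketch (the key name is an example):

    using System.Configuration;

    // Reads "GoogleMapsApiKey" from <appSettings> locally, and from the Web App's
    // App Settings when running in Azure - no installer and no per-customer config needed.
    string googleMapsApiKey = ConfigurationManager.AppSettings["GoogleMapsApiKey"];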

9. Relational Database Upgrades


After all these years, the hardest things to upgrade are still databases. This is especially the case if you go for the database-per-tenant approach. You release your software and your services are now running the new version; now you decide to upgrade all the database schemas. What if one of them fails? Do you roll back the entire release? When do you upgrade the databases? Do you ping every tenant and upgrade them? Do you do it when they log in?

There are some tools that can help you with database upgrades: 

These tools automatically patch your databases to the correct version level.

However, how you roll back, hotfix and upgrade your tenants is still up to you and your organisation.

What is my take? Use tools like Flyway, take an Evolutionary Database Design approach, and roll forward only: find a problem, fix it and release the hotfix.
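
With a database per tenant, roll-forward-only boils down to looping over every tenant database and applying any scripts it has not yet seen. A minimal sketch of the idea (the SchemaVersion table and the way scripts are supplied are assumptions; tools like Flyway or DbUp do this bookkeeping for you):

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Linq;

    public static class TenantDatabaseUpgrader
    {
        public static void UpgradeAll(IEnumerable<string> tenantConnectionStrings,
                                      IDictionary<int, string> migrationScriptsByVersion)
        {
            foreach (string connectionString in tenantConnectionStrings)
            {
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();

                    // Read the tenant's current schema version from a bookkeeping table.
                    int currentVersion;
                    using (var versionQuery = new SqlCommand("SELECT MAX(Version) FROM dbo.SchemaVersion", connection))
                    {
                        object result = versionQuery.ExecuteScalar();
                        currentVersion = (result == null || result == DBNull.Value) ? 0 : (int)result;
                    }

                    // Apply every newer script in order, recording each one as it succeeds.
                    foreach (var migration in migrationScriptsByVersion
                                                  .Where(m => m.Key > currentVersion)
                                                  .OrderBy(m => m.Key))
                    {
                        using (var transaction = connection.BeginTransaction())
                        using (var apply = new SqlCommand(migration.Value, connection, transaction))
                        using (var record = new SqlCommand(
                            "INSERT INTO dbo.SchemaVersion (Version) VALUES (@version)", connection, transaction))
                        {
                            apply.ExecuteNonQuery();
                            record.Parameters.AddWithValue("@version", migration.Key);
                            record.ExecuteNonQuery();
                            transaction.Commit();
                        }
                    }
                }
            }
        }
    }

If one tenant's upgrade fails you can stop, fix the script and re-run; tenants that are already up to date are simply skipped.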

10. Encryption


As you are storing your customers' data in someone else's data center, chances are you will need to re-review your encryption strategy. Cloud providers are aware of this, and to make your life easier they have released some features to assist you; for example, here are a few great encryption features from Azure: 

Once again, you need to re-review your requirements and think about what will work for you and your organisation. You don't want to encrypt everything using Always Encrypted, because you will not be able to do partial searches and your application's performance will be impacted.

Please be aware that the Always Encrypted feature will most likely not prevent SQL injection attacks, and you might not be able to use some of the newer features from Microsoft if you use this technology; for example, Temporal Tables currently don't support Always Encrypted.

11. Key Store


If you are using a key store, you will need to update it to support multi tenancy.

12. Fault tolerance


There is a lot of noise out there about fault tolerance on the Cloud; you need to be aware of these three things: 
  • Integration error handling 
  • Retry strategies 
  • Circuit Breaker

Integration Error Handling

I thoroughly recommend that you do integration error handling failure analysis. 
If the cache goes down, what happens? If Azure Service Bus goes down, what will happen? It's important that you design your app with failure in mind: if the cache goes down your app still works, it just runs a bit slower; if Azure Service Bus goes down, your app continues to work but messages don't get processed.

Retry Strategy (Optional)
A retry strategy can help you with transient failures. In the early cloud days, I've been told, these errors were common due to infrastructure instability and noisy neighbour issues. They are a lot less common these days, but they still do happen. The good news is that the Azure Storage SDK comes with a retry strategy by default, and you can use ReliableSqlConnection instead of SqlConnection to get a retry strategy when you connect to Azure SQL.

"A custom retry policy was used to implement exponential backoff while also logging each retry attempt. During the upload of nearly a million files averaging four MB in size around 1.8% of files required at least one retry." - Cloud Architecture Patterns - Using Microsoft Azure

You don’t have to implement this when you migrate, you can accept that some of the users might get a transient failure. As long as you have integration error handling in place and you keep your app transactionally consistent your users can just retry manually.

Circuit breaker (Optional)

13. Say goodbye to two phase commit


On the PaaS Cloud you can't have two phase commit; it's a conscious CAP theorem trade-off. This means no two phase commit for .NET or Java or anything else, and this is not an Azure limitation.
Please take a second and think about how this will impact your application.
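
A common way to live without it is to stop trying to commit to the database and the queue atomically: write the outgoing message into an "outbox" table inside the same database transaction as your business data, and have a background job forward outbox rows to the queue afterwards. A minimal sketch (table and column names are made up; the surrounding variables are assumed to be in scope):

    using System.Data.SqlClient;

    // Save the order and the message we want to publish in ONE local SQL transaction.
    // A separate worker later reads dbo.Outbox, pushes rows to the queue and marks them as sent.
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
            using (var saveOrder = new SqlCommand(
                "INSERT INTO dbo.[Order] (TenantId, Total) VALUES (@tenantId, @total)",
                connection, transaction))
            {
                saveOrder.Parameters.AddWithValue("@tenantId", tenantId);
                saveOrder.Parameters.AddWithValue("@total", orderTotal);
                saveOrder.ExecuteNonQuery();
            }

            using (var saveMessage = new SqlCommand(
                "INSERT INTO dbo.Outbox (Type, Payload) VALUES ('OrderPlaced', @payload)",
                connection, transaction))
            {
                saveMessage.Parameters.AddWithValue("@payload", orderPlacedJson);
                saveMessage.ExecuteNonQuery();
            }

            transaction.Commit(); // either both rows are written or neither is
        }
    }

You give up the cross-resource transaction but keep a consistent local record of what still needs to be published, which is usually enough.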

14. Point in time backups


Azure SQL comes with point-in-time backups; however, Azure Storage doesn't have a point-in-time backup. So you either need to implement your own or use a tool such as CherrySafe.

15. Accept Technical Debt


During the migration you will find more and more things that you would like to improve. 
However, the business reality is that you will need to accept some technical debt. This is a good thing: you need to get the product out there, see how it behaves and focus on the areas that actually need improvement. Make a list of things that you want to improve, get the most important things done before you ship, and during subsequent releases remove as much technical debt as you can.

What you think needs rewriting ASAP might change: as you convert your application you will find bigger, more pressing things that you must improve, so don't commit to fixing the trivial things prematurely.

16. New Persistence Options (Optional)


As you migrate you should seriously think about persistence. Ask yourself the following questions: 
  • Do you really need to store everything in the relational database? 
  • Can you store binary files in the Azure Storage? 
  • What about JSON documents? Why not store them in DocumentDB?

This will increase migration complexity, so you might want to delay this and do this in the future. That’s understandable. But just so you know there are lots of reasons why you should do this.

Azure SQL is not cheap and DTUs are limited. The less you talk to Azure SQL the better. Streaming binary files to and from Azure SQL is expensive; it takes up storage space and DTUs, so why do it? Azure Storage is very cheap and it comes with Read-Access Geo-Redundant Storage, so in case of a DR you can fail over.
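
Moving binary data out of Azure SQL is mostly a case of swapping a varbinary column for a blob reference. A minimal sketch with the 2016-era Azure Storage SDK (container, blob and variable names are examples):

    using System.IO;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    // The connection string comes from App Settings.
    CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
    CloudBlobClient blobClient = account.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference("tenant-documents");
    container.CreateIfNotExists();

    // Upload the file and store only the blob URI in Azure SQL, instead of the bytes themselves.
    CloudBlockBlob blob = container.GetBlockBlobReference("contoso/invoices/invoice-1234.pdf");
    using (FileStream fileStream = File.OpenRead(@"C:\temp\invoice-1234.pdf"))
    {
        blob.UploadFromStream(fileStream);
    }
    string uriToStoreInSql = blob.Uri.ToString();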

17. Monolith, Service-based or Microservice Architecture (Optional)

If you are migrating, I will say this: don't attempt to migrate over to Microservices at the same time. Do one thing: just get your application to the Cloud, enable multi tenancy and give the business some value. After you have achieved your conversion milestone, consider breaking your app down into smaller deployment modules.

Why? Because it’s complex, here are just a few things that you will need to think about: 
  • Data Ownership - How do different modules share data? Who owns the actual data? 
  • Integration - How are you going to integrate module A and B? 
  • API to API communication? 
  • Queues? 
  • Database Integration? 
  • Testing, how are you going to test the integration? Are you going to automate it? 
  • Scalability - How are you going to scale different modules? 
  • Transactions - In the Cloud there is no two phase commit, how are you going to work around this?

18. Infrastructure As Code (Optional)

Now that you are going to the Cloud you need to look after one more thing: your infrastructure provisioning will now be a config file, and someone will need to maintain it.

If you are using Azure you will define your infrastructure using Azure Resource Manager and PowerShell.

This is optional because you could set up your infrastructure manually (I don’t recommend this).

19. Gradual Release (Optional)


If you deploy your SaaS app and something goes wrong, you will not want to take all of your customers down at the same time. This means you should consider having several installations. For example, you can have different URLs for different installations, such as app.yourapp.com and app2.yourapp.com. When you deploy to app.yourapp.com and all goes well, you can then promote your app to app2.yourapp.com. This is optional because you could just deploy out of hours to a single installation.

20. Deployment Pipeline (Optional)


You will need a deployment pipeline if you want to have confidence in your deployments. Don't underestimate this step: depending on what services you are going to use and how complex your pipeline is, it can take weeks to set up. Chances are you will need to change your config files, transform them for different installations and tokenize them; deploy to staging slots and VIP-switch to production; add error handling; establish a promotion process; and so on.

There are plenty of tools that can help you with this, I’ve been using Visual Studio Team Services to orchestrate builds and deployments.

21. Avoid Vendor Lock In (Optional)


Build your application in such a way that you can migrate to another cloud provider with relative ease. I know that no matter what you do it will not be an easy journey; however, if you actually think about it and design with it in mind, it might take 1-3 months to migrate from Azure to AWS rather than a year. How? When you redesign your application, abstract the infrastructure components out so that there are no direct dependencies on Azure and the Azure SDKs.
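
In practice that means your application code depends on small interfaces that you own, and only the implementations know about Azure. A sketch of the idea (names are obviously yours to choose):

    // The application only ever sees this interface.
    public interface IFileStore
    {
        void Save(string path, byte[] content);
        byte[] Load(string path);
    }

    // One implementation talks to Azure Blob Storage; another could talk to Amazon S3,
    // or to the local file system for tests. Swapping providers means swapping this class only.
    public class AzureBlobFileStore : IFileStore
    {
        public void Save(string path, byte[] content)
        {
            // uses the Azure Storage SDK internally (omitted)
        }

        public byte[] Load(string path)
        {
            // uses the Azure Storage SDK internally (omitted)
            return null;
        }
    }

The same goes for queues, caches and configuration: wrap them once, and the Azure SDK stops leaking into your domain code.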

Saturday, 10 December 2016

Azure Web App Application Initialization

If you have an Azure Web App that needs initializing before it starts receiving traffic, and just calling the web server root "/" is not good enough (the default behaviour), then you should be looking into custom "applicationInitialization". Before you read this article, please take a look at the IIS Application Initialization documentation.

This article will cover 9 things that are not mentioned in the documentation.



This is how the web.config applicationInitialization XML config can look:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <applicationInitialization doAppInitAfterRestart="true" skipManagedModules="true">
      <add initializationPage="/init1.aspx" hostName="somewebapp.azurewebsites.net"/>
      <add initializationPage="/init2.aspx" hostName="somewebapp.azurewebsites.net"/>
    </applicationInitialization>
  </system.webServer>
</configuration>
This config should be placed in the wwwroot.

What they don't mention in the documentation


1. You can have many initialization pages

As you can see in the example above you can actually initialize many pages e.g. init1.aspx, init2.aspx.

2. Initializations are done in order

As per the above example, init1.aspx will be called first; then, as soon as init1.aspx responds or times out, it will move on to init2.aspx.

3. Initializations are response ignorant

ApplicationInitialization doesn't care if init1.aspx returns 200, 404 or 503. As long as it returns something, it will consider its job done.
This means that if your init1.aspx page times out, IIS will still think that you are good to go.

4. Can't have duplicate initialization page

You can't have duplicate keys, such as:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <applicationInitialization doAppInitAfterRestart="true" skipManagedModules="true">
      <add initializationPage="/init1.aspx" hostName="somewebapp.azurewebsites.net"/>
      <add initializationPage="/init1.aspx" hostName="somewebapp.azurewebsites.net"/>
    </applicationInitialization>
  </system.webServer>
</configuration>
This will make IIS error. You might think: why on earth would you want duplicate keys? It could be considered a workaround for the timeouts: if init1.aspx takes too long to respond, call it again, and hopefully next time it will respond immediately. To work around the duplicate key restriction you can just add a query string:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <applicationInitialization doAppInitAfterRestart="true" skipManagedModules="true">
      <add initializationPage="/init1.aspx?call=1" hostName="somewebapp.azurewebsites.net"/>
      <add initializationPage="/init1.aspx?call=2" hostName="somewebapp.azurewebsites.net"/>
    </applicationInitialization>
  </system.webServer>
</configuration>

5. Initializations are done internally

IIS applicationInitialization requests are made internally, so you will not see them in your W3C logs. However, if your init1.aspx page inspects the request and looks at the User Agent, it will see the following: "IIS Application Initialization Preload".
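
That user agent is handy if your warm-up endpoint should behave differently for real users, for example skipping authentication redirects or logging the warm-up separately. A small sketch for the code behind init1.aspx (WarmUpCaches is a hypothetical helper):

    // Detect IIS warm-up traffic by its user agent (ASP.NET, e.g. in Page_Load).
    bool isWarmUpRequest = string.Equals(
        Request.UserAgent,
        "IIS Application Initialization Preload",
        StringComparison.OrdinalIgnoreCase);

    if (isWarmUpRequest)
    {
        // Prime caches, force JIT / view compilation, open connection pools, etc.
        WarmUpCaches();
    }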

6. No client state is shared between calls

When IIS calls init1.aspx and then calls init2.aspx there is no common client state kept i.e. cookies, etc. This means you can't tag the IIS applicationInitialization client.

7. Initialization can warm up virtual applications


8. hostName is the same for all slots

This really surprised me: one Web App can have many slots and each slot has its own name, however when you use initializationPage you can just specify the main slot's hostName and it will work for all slots. So for example, if you have a production slot and a staging slot you might consider doing something like this:
Production Web.Config
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <applicationInitialization doAppInitAfterRestart="true" skipManagedModules="true">
      <add initializationPage="/init1.aspx?call=1" hostName="somewebapp.azurewebsites.net"/>
      <add initializationPage="/init1.aspx?call=2" hostName="somewebapp.azurewebsites.net"/>
    </applicationInitialization>
  </system.webServer>
</configuration>
Staging Web.Config
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <applicationInitialization doAppInitAfterRestart="true" skipManagedModules="true">
      <add initializationPage="/init1.aspx?call=1" hostName="somewebapp-staging.azurewebsites.net"/>
      <add initializationPage="/init1.aspx?call=2" hostName="somewebapp-staging.azurewebsites.net"/>
    </applicationInitialization>
  </system.webServer>
</configuration>
No need, this will work for both:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <applicationInitialization doAppInitAfterRestart="true" skipManagedModules="true">
      <add initializationPage="/init1.aspx?call=1" hostName="somewebapp.azurewebsites.net"/>
      <add initializationPage="/init1.aspx?call=2" hostName="somewebapp.azurewebsites.net"/>
    </applicationInitialization>
  </system.webServer>
</configuration>
This is good news, as you will need to perform fewer transformations.

9. Load Balancer

This might be one of the most important points. IIS applicationInitialization and the Azure load balancer work together: as soon as IIS initializes all of the pages, the internal Azure Web Apps load balancer is notified and it starts sending traffic to the fully initialized instance. So you can use applicationInitialization to warm up your application completely before you allow traffic to come in.

Conclusion

If you have a large app and you need to invoke many different areas to warm up caches, compilations, etc before your instance starts receiving traffic then IIS applicationInitialization is a good way to go!

As I don't work for Microsoft I can't guarantee that my description of Azure Web App inner workings is 100% accurate.

Monday, 29 August 2016

Applied Domain-Driven Design (DDD) - Event Logging & Sourcing For Auditing

In this article I am going to explore the use of Event Logging and Sourcing as a solution for domain auditing.  This article is not going to explore how to use Event Sourcing to obtain the current model state.

What is Event Logging?





In my previous article I explored domain events. In that article they were synchronous, unpersisted events: an aggregate root or service would just raise an event and a handler would handle it. In this article we are going to change that; we are going to persist these domain events.






What is Event Sourcing? 


"Append-only store to record the full series of events that describe actions taken on data in a domain, rather than storing just the current state, so that the store can be used to materialize the domain objects. This pattern can simplify tasks in complex domains by avoiding the requirement to synchronize the data model and the business domain; improve performance, scalability, and responsiveness; provide consistency for transactional data; and maintain full audit trails and history that may enable compensating actions." - Event Sourcing Pattern Microsoft


Requirements that domain Event Logging and Sourcing can fulfil:


  • As a technical support member of staff I would like to be able to view the audit log so that I can find out what my customers did, i.e. did they get themselves into a mess or is our software buggy?
  • As a system admin I would like to be able to view the audit log so that I can find out what my users are doing, i.e. someone is not sure why something was changed and the software admin needs to double check what happened. 
  • As a security analyst I would like to view the audit log so that I can find out who has committed fraud. 
  • As a business expert I would like to find out how long it has taken someone to go through a process so that we can optimise it. 
  • As a security analyst I would like the audit log to be immutable so that no one can tamper with it. 
  • As a software engineer I would like to see what a user has done so that I can reproduce their steps and debug the application. 
  • As a software engineer I would like persisted domain events to be forwarded to the queue, as we can't have 2 phase commit in the Cloud.


Why not just use CQRS with Event Sourcing? 


As Udi has mentioned, CQRS is a pattern that should be used where data changes are competitive or collaborative. A lot of systems don't fall into this category, and even if they do, you would only use CQRS (potentially with Event Sourcing; CQRS != Event Sourcing) for a part of the application, not everywhere. This means you can't get automatic audit for your entire system by using CQRS with Event Sourcing.

Event Sourcing is all about storing events and then sourcing them to derive the current model state.
If you don't need "undo" and "replay" functionality, and if you don't need to meet super high scalability non-functional requirements (which most likely you don't), why over-engineer?

This proposed solution just logs events to get some of the benefits that Event Sourcing provides, without deriving the current model state. However, it will still source the events to obtain the audit log.


Why is this a good solution for auditing? 


Your domain is rich and full of domain events (a domain event is something that has happened; it's an immutable fact and you are just broadcasting it). It's also written using the ubiquitous language. Because it describes what has happened and what was changed, it's a great candidate to meet your auditing, troubleshooting, debugging and 2 phase commit Cloud requirements.  


Pros:
  • It's fairly easy to create an audit read model from domain events  
  • Domain events provide the business context of what has happened and what has changed  
  • Reference data (Mr, Dr, etc) is stored in the same place so you can provide a full audit read model 
  • Events can be written away to an append-only store 
  • Only useful event data is stored 

Cons:
  • Every request (command) must result in a domain event and you need to flatten it; it's more development work
  • Requires testing 
  • Duplication of data: one dataset for current state, a second dataset for events. There might be mismatches due to bugs and changes. 

What about "proof of correctness"? 

Udi has already discussed this here (scroll down to "proof of correctness").

I recommend that you keep your storage transaction logs. This doesn't give you proof of correctness, however it gives you extra protection: if someone bypasses your application and tampers with your data in the database, at least it will be logged and you will be able to do something about it.


Domain event logging implementation example 


I am going to take my previous article and build upon it. In the past I introduced this interface:

public interface IDomainEvent {}

IDomainEvent interface was used like this:

   
    public class CustomerCheckedOut : IDomainEvent
    {
        public Purchase Purchase { get; set; }
    }


We are going to change IDomainEvent to DomainEvent:

    
    public abstract class DomainEvent 
    {
        public string Type { get { return this.GetType().Name; } }

        public DateTime Created { get; private set; }

        public Dictionary<string, Object> Args { get; private set; }

        public DomainEvent()
        {
            this.Created = DateTime.Now;
            this.Args = new Dictionary<string, Object>();
        }

        public abstract void Flatten();
    }

This new DomainEvent will:
  1. Give you a timestamp for when the domain event was created 
  2. Give you the domain event name 
  3. Force events to flatten their payloads 
  4. Store important arguments against the event 

Here is an example implementation:
   
    public class CustomerCheckedOut : DomainEvent
    {
        public Purchase Purchase { get; set; }

        public override void Flatten()
        {
            this.Args.Add("CustomerId", this.Purchase.Customer.Id);
            this.Args.Add("PurchaseId", this.Purchase.Id);
            this.Args.Add("TotalCost", this.Purchase.TotalCost);
            this.Args.Add("TotalTax", this.Purchase.TotalTax);
            this.Args.Add("NumberOfProducts", this.Purchase.Products.Count);
        }
    }

The Flatten method is used to capture important arguments against the event. How you flatten really depends on your requirements. For example, if you want to store information for audit purposes, the above Flatten might be good enough. If you want to store events so that you can "undo" or "replay", you might want to store more information.

Why have a Flatten method at all? Why not serialise and store the entire "Purchase" object? This object might have many value objects hanging off it, and it might also have access to another aggregate root. You will end up storing a lot of redundant data, it will be harder to keep track of versions (if your object shape changes, which it will) and it will be harder to query. This is why the Flatten method is important: it strips away all of the noise.

We don't want to handle all event flattening and persisting manually. To simplify and automate the event handling process I've introduced a generic event handler:
  
    public class DomainEventHandle<TDomainEvent> : Handles<TDomainEvent>
        where TDomainEvent : DomainEvent
    {
        IDomainEventRepository domainEventRepository;

        public DomainEventHandle(IDomainEventRepository domainEventRepository)
        {
            this.domainEventRepository = domainEventRepository;
        }

        public void Handle(TDomainEvent args)
        {
            args.Flatten();
            this.domainEventRepository.Add(args);
        }
    }
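
IDomainEventRepository is not shown above; here is a minimal sketch of what it could look like, serialising the flattened Args dictionary to JSON and appending it to a log table (the table name, schema and use of Json.NET are assumptions):

    using System.Data.SqlClient;
    using Newtonsoft.Json;

    public interface IDomainEventRepository
    {
        void Add(DomainEvent domainEvent);
    }

    // Appends each flattened event to an append-only audit table.
    public class SqlDomainEventRepository : IDomainEventRepository
    {
        private readonly string connectionString;

        public SqlDomainEventRepository(string connectionString)
        {
            this.connectionString = connectionString;
        }

        public void Add(DomainEvent domainEvent)
        {
            using (var connection = new SqlConnection(this.connectionString))
            using (var command = connection.CreateCommand())
            {
                command.CommandText =
                    "INSERT INTO dbo.DomainEventLog (Type, Created, Args) VALUES (@type, @created, @args)";
                command.Parameters.AddWithValue("@type", domainEvent.Type);
                command.Parameters.AddWithValue("@created", domainEvent.Created);
                command.Parameters.AddWithValue("@args", JsonConvert.SerializeObject(domainEvent.Args));
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }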




Extending this to meet additional security and operational requirements  

You can take this a few steps further and create a correlation id for the entire web request. This way you will be able to correlate IIS W3C logs, event logs and database logs. Find out how you can achieve this here.

*Note: Code in this article is not production ready and is used for prototyping purposes only. If you have suggestions or feedback please do comment. 

Friday, 12 August 2016

Creating Custom Key Store Provider for SQL Always Encrypted (Without Key Vault Example PoC)

Recently we had to implement a custom key store provider for Always Encrypted. We wanted it to access our own key store to retrieve the master key and to decrypt the column key. It was not very clear how this could be achieved, so I decided to produce a PoC and write an article about it.

Custom Key Store Provider for SQL Always Encrypted


Setup your C# application


Step 1:

Make sure your project is set to .NET Framework 4.6.

Step 2:

Implement your own custom key store provider by extending SqlColumnEncryptionKeyStoreProvider and overriding these two methods:

 public class MyOwnCustomKeyStoreProvider : SqlColumnEncryptionKeyStoreProvider
    {

        string masterKeyThatWillNotBeHardcodedInYourApp = "someMasterKey";
        byte[] saltThatWillNotBeHardcodedInYourApp = UTF8Encoding.UTF8.GetBytes("someSalt");

        //This will constantly get used
        public override byte[] DecryptColumnEncryptionKey(string masterKeyPath, string encryptionAlgorithm, byte[] encryptedColumnEncryptionKey)
        {
            using (MemoryStream ms = new MemoryStream())
            {
                using (RijndaelManaged AES = new RijndaelManaged())
                {
                    AES.KeySize = 256;
                    AES.BlockSize = 128;

                    Rfc2898DeriveBytes keyBytes = new Rfc2898DeriveBytes(
                            masterKeyThatWillNotBeHardcodedInYourApp, 
                            saltThatWillNotBeHardcodedInYourApp, 
                            1000
                       );
                    AES.Key = keyBytes.GetBytes(AES.KeySize / 8);
                    AES.IV = keyBytes.GetBytes(AES.BlockSize / 8);

                    AES.Mode = CipherMode.CBC;

                    using (CryptoStream cs = new CryptoStream(ms, AES.CreateDecryptor(), CryptoStreamMode.Write))
                    {
                        cs.Write(encryptedColumnEncryptionKey, 0, encryptedColumnEncryptionKey.Length);
                        cs.Close();
                    }
                    encryptedColumnEncryptionKey = ms.ToArray();
                }
            }

            return encryptedColumnEncryptionKey;
        }

        //This will never get used by the app, I've used it just to encrypt the column key
        public override byte[] EncryptColumnEncryptionKey(string masterKeyPath, string encryptionAlgorithm, byte[] columnEncryptionKey)
        {
            byte[] encryptedBytes = null;
            using (MemoryStream ms = new MemoryStream())
            {
                using (RijndaelManaged AES = new RijndaelManaged())
                {
                    AES.KeySize = 256;
                    AES.BlockSize = 128;

                    Rfc2898DeriveBytes keyBytes = new Rfc2898DeriveBytes(
                            masterKeyThatWillNotBeHardcodedInYourApp,
                            saltThatWillNotBeHardcodedInYourApp,
                            1000
                       );

                    AES.Key = keyBytes.GetBytes(AES.KeySize / 8);
                    AES.IV = keyBytes.GetBytes(AES.BlockSize / 8);

                    AES.Mode = CipherMode.CBC;

                    using (CryptoStream cs = new CryptoStream(ms, AES.CreateEncryptor(), CryptoStreamMode.Write))
                    {
                        cs.Write(columnEncryptionKey, 0, columnEncryptionKey.Length);
                        cs.Close();
                    }
                    encryptedBytes = ms.ToArray();
                }
            }

            return encryptedBytes;
        }
    }


Step 3:

Register your provider with the SqlConnection:

            //Register your encryption key strategies 
            Dictionary<string, SqlColumnEncryptionKeyStoreProvider> providerStrategies =
                new Dictionary<string, SqlColumnEncryptionKeyStoreProvider>();

            providerStrategies.Add("MY_OWN_CUSTOM_KEY_STORE_PROVIDER", new MyOwnCustomKeyStoreProvider());

            SqlConnection.RegisterColumnEncryptionKeyStoreProviders(providerStrategies);

Step 4:

Now, pay attention: make sure that your connection is configured correctly. I spent several hours trying to figure out why my setup was not working, and it was all because I did not include "Column Encryption Setting=Enabled" in the connection string:

 new SqlConnection("Server=tcp:some.database.windows.net,1433;Database=testing;User ID=testing@testing;Password=Password;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;Column Encryption Setting=Enabled")

If you don't include Column Encryption Setting=Enabled, you will get an unhelpful exception like this:

An unhandled exception of type 'System.Data.SqlClient.SqlException' occurred in System.Data.dll

Additional information: Operand type clash: nvarchar is incompatible with nvarchar(11) encrypted with (encryption_type = 'DETERMINISTIC', encryption_algorithm_name = 'AEAD_AES_256_CBC_HMAC_SHA_256', column_encryption_key_name = 'MO_CEK1', column_encryption_key_database_name = 'sometest')


Incorrect parameter encryption metadata was received from the client. The error occurred during the invocation of the batch and therefore the client can refresh the parameter encryption metadata by calling sp_describe_parameter_encryption and retry.


Setup your database


Step 1:

Define your custom key store provider:

 CREATE COLUMN MASTER KEY [MO_CMKSP] --Stands for My Own Custom Key Store Provider
 WITH ( KEY_STORE_PROVIDER_NAME = 'MY_OWN_CUSTOM_KEY_STORE_PROVIDER', 
 KEY_PATH = 'MyKeyStoreWillNotUseThis')

Step 2:

Define the column encryption key that will get unwrapped by your own custom key store provider. The encrypted value needs to be some random value that has been encrypted by your master key and stored here as hexadecimal:

 CREATE COLUMN ENCRYPTION KEY [MO_CEK1] -- Stands for My Own Column Encryption Key 1
 WITH VALUES
 (
  COLUMN_MASTER_KEY = [MO_CMKSP],
  ALGORITHM = 'RSA_OAEP',
  ENCRYPTED_VALUE = 0x29128e12266a71dd098bc3223b3bbf293a275b2ec8c13f97515f54dd7d2a54af46f37071e0e16e777d73f4a743ddb991
 )


Step 3:

Encrypt columns by specifying the column encryption key:

 CREATE TABLE [dbo].[Employee](
  [Id] [int] IDENTITY(1,1) NOT NULL,
  [SSN] [nvarchar](11) COLLATE Latin1_General_BIN2
  ENCRYPTED WITH (
   COLUMN_ENCRYPTION_KEY = [MO_CEK1],
   ENCRYPTION_TYPE = Deterministic,
   ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
  ) NOT NULL,
  [Salary][int] 
  ENCRYPTED WITH (
   COLUMN_ENCRYPTION_KEY = [MO_CEK1],
   ENCRYPTION_TYPE = RANDOMIZED,
   ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
  ) NOT NULL,
  PRIMARY KEY CLUSTERED
  (
  [Id] ASC
  )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
 ) ON [PRIMARY]

 CREATE TABLE [dbo].[EmployeeExtraInformation](
  [Id] [int] IDENTITY(1,1) NOT NULL,
  [EyeColor] [nvarchar](11) NOT NULL,
  [SSN] [nvarchar](11) COLLATE Latin1_General_BIN2
  ENCRYPTED WITH (
   COLUMN_ENCRYPTION_KEY = [MO_CEK1],
   ENCRYPTION_TYPE = Deterministic,
   ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
  ) NOT NULL,
  PRIMARY KEY CLUSTERED
  (
  [Id] ASC
  )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
 ) ON [PRIMARY]


PoC Code


Program.cs

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

namespace CustomKeyStoreProvider
{
    class Program
    {
        static void Main(string[] args)
        {
            //Register your encryption key strategies 
            Dictionary<string, SqlColumnEncryptionKeyStoreProvider> providerStrategies =
                new Dictionary<string, SqlColumnEncryptionKeyStoreProvider>();

            providerStrategies.Add("MY_OWN_CUSTOM_KEY_STORE_PROVIDER", new MyOwnCustomKeyStoreProvider());


            //Apparently this works transparently with the Hibernate and the Entity Framework!
            SqlConnection.RegisterColumnEncryptionKeyStoreProviders(providerStrategies);

            using (SqlConnection connection = new SqlConnection("{Your connection string};Column Encryption Setting=Enabled"))
            {
                connection.Open();
                string ssn;
                using (SqlCommand command = connection.CreateCommand())
                {
                    command.CommandText = "INSERT INTO [dbo].[Employee] VALUES (@ssn, @salary)";
                    Random rand = new Random();
                    ssn = string.Format(@"{0:d3}-{1:d2}-{2:d4}", rand.Next(0, 1000), rand.Next(0, 100), rand.Next(0, 10000));
                    command.Parameters.AddWithValue("@ssn", ssn);
                    command.Parameters.AddWithValue("@salary", 18000);
                    command.ExecuteNonQuery();
                }

                using (SqlCommand command = connection.CreateCommand())
                {
                    command.CommandText = "INSERT INTO [dbo].[EmployeeExtraInformation] (eyecolor, ssn) VALUES (@eyecolor, @ssn)";
                    command.Parameters.AddWithValue("@eyecolor", "blue");
                    command.Parameters.AddWithValue("@ssn", ssn);
                    command.ExecuteNonQuery();
                }

                //Show stored data unencrypted 
                using (SqlCommand command = connection.CreateCommand())
                {
                    command.CommandText = "SELECT [id], [ssn], [salary] FROM [dbo].[Employee]";
                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        if(reader.HasRows)
                        {
                            Console.WriteLine("-- Showing all rows:");
                            while (reader.Read())
                            {
                                Console.WriteLine($"id : {reader["id"]}, ssn : {reader["ssn"]}, salary : {reader["salary"]}");
                            }
                        }
                    }
                }

                //Equals search, this actually works
                using(SqlCommand command = connection.CreateCommand())
                {
                    command.CommandText = "SELECT [id], [ssn], [salary] FROM [dbo].[Employee] WHERE [ssn] = @ssn";
                    command.Parameters.AddWithValue("@ssn", ssn);

                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        if (reader.HasRows)
                        {
                            Console.WriteLine($"-- Showing found record for ssn {ssn}:");
                            while (reader.Read())
                            {
                                Console.WriteLine($"id : {reader["id"]}, ssn : {reader["ssn"]}, salary : {reader["salary"]}");
                            }
                        }
                    }
                }

                //Inner Join, this works as well
                using (SqlCommand command = connection.CreateCommand())
                {
                    command.CommandText = @"SELECT [dbo].[Employee].[salary], [dbo].[Employee].[ssn], [dbo].[EmployeeExtraInformation].[eyecolor] FROM [dbo].[Employee] 
                                                    INNER JOIN [dbo].[EmployeeExtraInformation] ON [dbo].[Employee].[ssn] = [dbo].[EmployeeExtraInformation].[ssn]";

                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        if (reader.HasRows)
                        {
                            Console.WriteLine($"-- Showing all records inner joined:");
                            while (reader.Read())
                            {
                                Console.WriteLine($"eyecolor : {reader["eyecolor"]}, ssn : {reader["ssn"]}, salary : {reader["salary"]}");
                            }
                        }
                    }
                }

                try
                {
                    using (SqlCommand command = connection.CreateCommand())
                    {
                        command.CommandText = "SELECT [id], [ssn], [salary] FROM [dbo].[Employee] WHERE [ssn] like @ssn";
                        command.Parameters.AddWithValue("@ssn", ssn);

                        command.ExecuteReader();
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine("-- As expected, can't search on ssn using like:");
                    Console.WriteLine(ex.Message);
                }

                try
                {
                    using (SqlCommand command = connection.CreateCommand())
                    {
                        command.CommandText = "SELECT [id], [ssn], [salary] FROM [dbo].[Employee] WHERE [salary] = @salary";
                        command.Parameters.AddWithValue("@salary", 18000);

                        command.ExecuteReader();
                    }
                }
                catch(Exception ex)
                {
                    Console.WriteLine("-- As expected, can't search on salary, it is a randomized field:");
                    Console.WriteLine(ex.Message);
                }

                connection.Close();
            }

            Console.ReadLine(); 
        }
    }
}

MyOwnCustomKeyStoreProvider.cs

using System.Data.SqlClient;
using System.IO;
using System.Security.Cryptography;
using System.Text;

namespace CustomKeyStoreProvider
{
    public class MyOwnCustomKeyStoreProvider : SqlColumnEncryptionKeyStoreProvider
    {

        string masterKeyThatWillNotBeHardcodedInYourApp = "someMasterKey";
        byte[] saltThatWillNotBeHardcodedInYourApp = UTF8Encoding.UTF8.GetBytes("someSalt");

        //This will constantly get used
        public override byte[] DecryptColumnEncryptionKey(string masterKeyPath, string encryptionAlgorithm, byte[] encryptedColumnEncryptionKey)
        {
            using (MemoryStream ms = new MemoryStream())
            {
                using (RijndaelManaged AES = new RijndaelManaged())
                {
                    AES.KeySize = 256;
                    AES.BlockSize = 128;

                    Rfc2898DeriveBytes keyBytes = new Rfc2898DeriveBytes(
                            masterKeyThatWillNotBeHardcodedInYourApp, 
                            saltThatWillNotBeHardcodedInYourApp, 
                            1000
                       );
                    AES.Key = keyBytes.GetBytes(AES.KeySize / 8);
                    AES.IV = keyBytes.GetBytes(AES.BlockSize / 8);

                    AES.Mode = CipherMode.CBC;

                    using (CryptoStream cs = new CryptoStream(ms, AES.CreateDecryptor(), CryptoStreamMode.Write))
                    {
                        cs.Write(encryptedColumnEncryptionKey, 0, encryptedColumnEncryptionKey.Length);
                        cs.Close();
                    }
                    encryptedColumnEncryptionKey = ms.ToArray();
                }
            }

            return encryptedColumnEncryptionKey;
        }

        //This will never get used by the app, I've used it just to encrypt the column key
        public override byte[] EncryptColumnEncryptionKey(string masterKeyPath, string encryptionAlgorithm, byte[] columnEncryptionKey)
        {
            byte[] encryptedBytes = null;
            using (MemoryStream ms = new MemoryStream())
            {
                using (RijndaelManaged AES = new RijndaelManaged())
                {
                    AES.KeySize = 256;
                    AES.BlockSize = 128;

                    Rfc2898DeriveBytes keyBytes = new Rfc2898DeriveBytes(
                            masterKeyThatWillNotBeHardcodedInYourApp,
                            saltThatWillNotBeHardcodedInYourApp,
                            1000
                       );

                    AES.Key = keyBytes.GetBytes(AES.KeySize / 8);
                    AES.IV = keyBytes.GetBytes(AES.BlockSize / 8);

                    AES.Mode = CipherMode.CBC;

                    using (CryptoStream cs = new CryptoStream(ms, AES.CreateEncryptor(), CryptoStreamMode.Write))
                    {
                        cs.Write(columnEncryptionKey, 0, columnEncryptionKey.Length);
                        cs.Close();
                    }
                    encryptedBytes = ms.ToArray();
                }
            }

            return encryptedBytes;
        }
    }
}


Setup.sql

CREATE COLUMN MASTER KEY [MO_CMKSP] --Stands for My Own Custom Key Store Provider
 WITH ( KEY_STORE_PROVIDER_NAME = 'MY_OWN_CUSTOM_KEY_STORE_PROVIDER', 
 KEY_PATH = 'MyKeyStoreWillNotUseThis')

GO 

CREATE COLUMN ENCRYPTION KEY [MO_CEK1] -- Stands for My Own Column Encryption Key 1
 WITH VALUES
 (
  COLUMN_MASTER_KEY = [MO_CMKSP],
  ALGORITHM = 'RSA_OAEP',
  ENCRYPTED_VALUE = 0x29128e12266a71dd098bc3223b3bbf293a275b2ec8c13f97515f54dd7d2a54af46f37071e0e16e777d73f4a743ddb991
 )
GO

CREATE TABLE [dbo].[Employee](
  [Id] [int] IDENTITY(1,1) NOT NULL,
  [SSN] [nvarchar](11) COLLATE Latin1_General_BIN2
  ENCRYPTED WITH (
   COLUMN_ENCRYPTION_KEY = [MO_CEK1],
   ENCRYPTION_TYPE = Deterministic,
   ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
  ) NOT NULL,
  [Salary][int] 
  ENCRYPTED WITH (
   COLUMN_ENCRYPTION_KEY = [MO_CEK1],
   ENCRYPTION_TYPE = RANDOMIZED,
   ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
  ) NOT NULL,
  PRIMARY KEY CLUSTERED
  (
  [Id] ASC
  )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
 ) ON [PRIMARY]

 CREATE TABLE [dbo].[EmployeeExtraInformation](
  [Id] [int] IDENTITY(1,1) NOT NULL,
  [EyeColor] [nvarchar](11) NOT NULL,
  [SSN] [nvarchar](11) COLLATE Latin1_General_BIN2
  ENCRYPTED WITH (
   COLUMN_ENCRYPTION_KEY = [MO_CEK1],
   ENCRYPTION_TYPE = Deterministic,
   ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
  ) NOT NULL,
  PRIMARY KEY CLUSTERED
  (
  [Id] ASC
  )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
 ) ON [PRIMARY]
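
For completeness, here is a rough sketch of how the ENCRYPTED_VALUE above could have been produced with the provider's EncryptColumnEncryptionKey method (as the code comment says, that method is only needed for this one-off step). The class name MyOwnCustomKeyStoreProvider is an assumption; the printed hex literal is what gets pasted into the CREATE COLUMN ENCRYPTION KEY statement.

using System;
using System.Security.Cryptography;

class GenerateColumnEncryptionKeyValue
{
    static void Main()
    {
        // Generate a random 256-bit column encryption key.
        byte[] columnEncryptionKey = new byte[32];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(columnEncryptionKey);
        }

        // Wrap it with the custom provider (class name is an assumption, see above).
        var provider = new MyOwnCustomKeyStoreProvider();
        byte[] encrypted = provider.EncryptColumnEncryptionKey(
            "MyKeyStoreWillNotUseThis", // KEY_PATH from Setup.sql, ignored by this provider
            "RSA_OAEP",                 // algorithm name declared in the CEK definition
            columnEncryptionKey);

        // Paste this hex literal into ENCRYPTED_VALUE.
        Console.WriteLine("0x" + BitConverter.ToString(encrypted).Replace("-", ""));
    }
}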


Useful links:

*Note: Code in this article is not production ready and is used for prototyping purposes only. If you have suggestions or feedback please do comment. 

Sunday, 7 August 2016

How to use TypeScript with FlotCharts or any other external JavaScript library

I've started using TypeScript recently and I wanted to interact with FlotCharts. I had two questions on my mind:
  1. How does a non-TypeScript library interact with a TypeScript app?
  2. How do I use FlotCharts now? Has the interface completely changed? 
This article will answer these questions and provide you with two implementation examples.


Step 1:

Get the FlotCharts type definition file. You can do this via NuGet by invoking this command: 
Install-Package flot.TypeScript.DefinitelyTyped. 

A type definition file is just a set of interfaces that describes the native JavaScript library, so that your TypeScript code can interact with it in a type-checked way. 
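
If you are curious what such a file contains, the shape is roughly the following. This is a simplified sketch for illustration only; the real jquery.flot.d.ts from DefinitelyTyped declares many more interfaces and members.

// Simplified sketch only; the real jquery.flot.d.ts is much larger.
declare namespace jquery.flot {
    interface dataSeries {
        label?: string;
        data: Array<Array<number>>;
    }

    interface gridOptions {
        show?: boolean;
    }

    interface plotOptions {
        grid?: gridOptions;
    }

    interface plot {
        // methods for redrawing, axes, etc. live on the returned plot object
    }
}

// The definition file also extends JQueryStatic, which is why $.plot(...) type-checks:
interface JQueryStatic {
    plot(placeholder: JQuery, data: any[], options?: jquery.flot.plotOptions): jquery.flot.plot;
}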


Step 2:

Get the actual FlotCharts library. You can do this via NuGet by invoking this command:
Install-Package flot 


Step 3:

Check your "Scripts" folder structure, it should look something like this:



Step 4:

Take a look at the jquery.flot interfaces; they can be found here: Scripts/Typings/flot/jquery.flot.d.ts.


Step 5 - Implementation:


Explicit Approach
If you are a strong-typing purist you can go all the way and actually implement the defined interfaces like so:
class DataSeries implements jquery.flot.dataSeries {

    label: string;
    data: Array<Array<number>> = new Array<Array<number>>();

    constructor(label: string, data: Array<Array<number>>) {
        this.label = label;
        this.data = data;
    }
}

class PlotOptions implements jquery.flot.plotOptions {
    grid: jquery.flot.gridOptions;
    constructor(grid: jquery.flot.gridOptions) {
        this.grid = grid;
    }
}

class GridOptions implements jquery.flot.gridOptions {
    show: boolean;
    constructor(show: boolean) {
        this.show = show;
    }
}
Once you have implemented your classes, you can interact with the FlotChart library like this:
    let dataSeriesA: DataSeries = new DataSeries("A", [[0, 10], [1, 20], [2, 30]]);
    let dataSeriesB: DataSeries = new DataSeries("B", [[0, 5], [1, 3], [2, 50]]);
    let plotElement: JQuery = jQuery("#plot");
    jQuery.plot(plotElement, [dataSeriesA, dataSeriesB], new PlotOptions(new GridOptions(false)));

Implicit Approach
If you don't want to implement classes and just want to provide objects you can interact with the FlotChart library like this instead:
    $.plot(
        $("#plot"),
        [
            { label: "A", data: [[0, 10], [1, 20], [2, 30]] },
            { label: "B", data: [[0, 5], [1, 3], [2, 50]] }
        ],
        {
            grid: {
                show : false
            }
        } 
    );

Sample code


App.ts:
 

class DataSeries implements jquery.flot.dataSeries {
    label: string;
    data: Array<Array<number>> = new Array<Array<number>>();
    constructor(label: string, data: Array<Array<number>>) {
        this.label = label;
        this.data = data;
    }
}

class PlotOptions implements jquery.flot.plotOptions {
    grid: jquery.flot.gridOptions;
    constructor(grid: jquery.flot.gridOptions) {
        this.grid = grid;
    }
}

class GridOptions implements jquery.flot.gridOptions {
    show: boolean;
    constructor(show: boolean) {
        this.show = show;
    }
}

function explicit() {
    let dataSeriesA: DataSeries = new DataSeries("A", [[0, 10], [1, 20], [2, 30]]);
    let dataSeriesB: DataSeries = new DataSeries("B", [[0, 5], [1, 3], [2, 50]]);
    let plotElement: JQuery = jQuery("#plotE");
    jQuery.plot(plotElement, [dataSeriesA, dataSeriesB], new PlotOptions(new GridOptions(false)));
}

function implicit() {
    $.plot(
        $("#plotI"),
        [
            { label: "A", data: [[0, 10], [1, 20], [2, 30]] },
            { label: "B", data: [[0, 5], [1, 3], [2, 50]] }
        ],
        {
            grid: {
                show : false
            }
        }
    );
}

window.onload = () => {
    explicit();
    implicit();
};


Index.html:
 
<!DOCTYPE html>

<html lang="en">
<head>
    <meta charset="utf-8" />
    <title>TypeScript With Flot Demo</title>
    <script src="Scripts/jquery-1.4.1.js"></script>
    <script src="Scripts/flot/jquery.flot.js"></script>
    <script src="app.js"></script>
</head>
<body>
    <h1>TypeScript with Flot demo</h1>

    <div class="plot-container">
        <div id="plotE" style="width:500px;height:500px;"></div>
        <div id="plotI" style="width:500px;height:500px;"></div>
    </div>

</body>
</html>

Summary:

  1. TypeScript apps interact with non-TypeScript libraries through definition files.
  2. Library interfaces remain mostly the same. 
  3. You can interface explicitly by implementing reusable classes, or implicitly by passing object literals.

Useful links:

*Note: Code in this article is not production ready and is used for prototyping purposes only. If you have suggestions or feedback please do comment. 

Wednesday, 1 June 2016

Applied Domain-Driven Design (DDD), Part 7 - Read Model

When I first started using DDD I ran into a really messy situation. I had my aggregate root and it linked itself to another child aggregate root. Everything worked really well. Shortly after everything was written a new requirement came through: I had to expose counts and sums of data based on different filters. This was very painful. I ended up modifying my aggregate roots to try to provide these additional properties, and the approach did not perform; for each aggregate root it was loading another aggregate root with its entities and summing them. I played around with the NHibernate mapping files and managed to make it performant, but by that point the mappings were heavily tuned and my aggregate roots were polluted with query methods. I really didn't like this approach. Shortly after, I came up with another idea: how about we create an immutable model that maps directly to a SQL view and let the infrastructure handle the mapping? That way our aggregate roots remain unaffected and we get much better performance through SQL querying. This is when I discovered the read model.

In this article we are going to explore how we can end up in this messy situation and why you should use the read model for data mash up and summarisation.

Let's recap what our fictional domain model looks like (trimmed to show only properties):
  
    public class Customer : IDomainEntity
    {
        private List<Purchase> purchases = new List<Purchase>();

        public virtual Guid Id { get; protected set; }
        public virtual string FirstName { get; protected set; }
        public virtual string LastName { get; protected set; }
        public virtual string Email { get; protected set; }

        public virtual ReadOnlyCollection<Purchase> Purchases { get { return this.purchases.AsReadOnly(); } }
    }

    public class Purchase
    {
        private List<PurchasedProduct> purchasedProducts = new List<PurchasedProduct>();

        public Guid Id { get; protected set; }
        public ReadOnlyCollection<PurchasedProduct> Products
        {
            get { return purchasedProducts.AsReadOnly(); }
        }
        public DateTime Created { get; protected set; }
        public Customer Customer { get; protected set; }
        public decimal TotalCost { get; protected set; }
    }

    public class PurchasedProduct
    {
        public Purchase Purchase { get; protected set; }
        public Product Product { get; protected set; }
        public int Quantity { get; protected set; }
    }
Please notice the deep relationship between Customer, Purchase and PurchasedProduct.

New Requirement 
The back office team has just come up with a brand new requirement. They need a list of customers that have made purchases; they want to see how much each customer has spent overall and how many products they have purchased. They are going to contact these customers, thank them for their custom, ask them a few questions and give them discount vouchers.

Here is the DTO that we will need to populate and return via the API:
    
    public class CustomerPurchaseHistoryDto
    {
        public Guid CustomerId { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; }
        public int TotalPurchases { get; set; }
        public int TotalProductsPurchased { get; set; }
        public decimal TotalCost { get; set; }
    }

Approach 1 - Domain Model DTO Projection
   
        public List<CustomerPurchaseHistoryDto> GetAllCustomerPurchaseHistory()
        {
            IEnumerable<Customer> customers =
                 this.customerRepository.Find(new CustomerPurchasedNProductsSpec(1));

            List<CustomerPurchaseHistoryDto> customersPurchaseHistory =
                new List<CustomerPurchaseHistoryDto>();

            foreach (Customer customer in customers)
            {
                CustomerPurchaseHistoryDto customerPurchaseHistory = new CustomerPurchaseHistoryDto();
                customerPurchaseHistory.CustomerId = customer.Id;
                customerPurchaseHistory.FirstName = customer.FirstName;
                customerPurchaseHistory.LastName = customer.LastName;
                customerPurchaseHistory.Email = customer.Email;
                customerPurchaseHistory.TotalPurchases = customer.Purchases.Count;
                customerPurchaseHistory.TotalProductsPurchased =
                    customer.Purchases.Sum(purchase => purchase.Products.Sum(product => product.Quantity));
                customerPurchaseHistory.TotalCost = customer.Purchases.Sum(purchase => purchase.TotalCost);
                customersPurchaseHistory.Add(customerPurchaseHistory);

            }
            return customersPurchaseHistory;
        } 

With this approach we have to get every customer, for each customer get their purchases, and for each purchase get the products that were actually purchased, then sum it all up (the nested Sum calls above). That's a lot of lazy loading. You could fine-tune your NHibernate mapping so that it fetches all of this data with joins in one go. However, that means you will be loading unnecessary child data when you are only interested in the parent data (Customer). Also, what if your domain model is not exposing some of the data that you would like to summarise? Now you have to add extra properties to your aggregate roots to make this work. Messy.

Approach 2 - Infrastructure Read Model Projection 
    
    /* Read-only model. I don't think read models should have a "ReadModel" suffix:
    we don't suffix Customer, we don't write CustomerDomainModel or CustomerModel, we just write Customer.
    We do this because it's part of the ubiquitous language, and the same goes for CustomerPurchaseHistory.
    I've added the suffix here just to make things more obvious. */
    public class CustomerPurchaseHistoryReadModel
    {
        public Guid CustomerId { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; }
        public int TotalPurchases { get; set; }
        public int TotalProductsPurchased { get; set; }
        public decimal TotalCost { get; set; }
    }

    public List<CustomerPurchaseHistoryDto> GetAllCustomerPurchaseHistory()
    {
        IEnumerable<CustomerPurchaseHistoryReadModel> customersPurchaseHistory =
                this.customerRepository.GetCustomersPurchaseHistory();

        return AutoMapper.Mapper.Map<IEnumerable<CustomerPurchaseHistoryReadModel>, List<CustomerPurchaseHistoryDto>>(customersPurchaseHistory);
    }

    interface ICustomerRepository : IRepository<Customer>
    {
        IEnumerable<CustomerPurchaseHistoryReadModel> GetCustomersPurchaseHistory();
    }

    public class CustomerNHRepository : ICustomerRepository
    {
        public IEnumerable<CustomerPurchaseHistoryReadModel> GetCustomersPurchaseHistory()
        {
            //Here you either call a SQL view, do HQL joins, etc.
            throw new NotImplementedException();
        }
    }

In this example we have created CustomerPurchaseHistoryReadModel, which is identical to CustomerPurchaseHistoryDto; this means I can keep things simple and just use AutoMapper to do a one-to-one mapping. I've extended IRepository by creating a new interface, ICustomerRepository, and adding a custom method, GetCustomersPurchaseHistory(). Now I need to fill in the CustomerNHRepository.GetCustomersPurchaseHistory() method. As we are now in the infrastructure layer, we can just write some custom HQL or query a SQL view; a sketch of what that might look like is shown below.
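
As a hedged illustration only, the filled-in repository method might look something like this native SQL projection. The ISession field and the table and column names are assumptions about your schema; a dedicated SQL view or an HQL join would work equally well.

    public class CustomerNHRepository : ICustomerRepository
    {
        // IRepository<Customer> members omitted for brevity.
        private readonly NHibernate.ISession session;

        public CustomerNHRepository(NHibernate.ISession session)
        {
            this.session = session;
        }

        public IEnumerable<CustomerPurchaseHistoryReadModel> GetCustomersPurchaseHistory()
        {
            // Sketch only: table and column names are assumptions, adjust to your schema or point this at a SQL view instead.
            return this.session
                .CreateSQLQuery(@"
                    SELECT  c.Id             AS CustomerId,
                            c.FirstName      AS FirstName,
                            c.LastName       AS LastName,
                            c.Email          AS Email,
                            COUNT(p.Id)      AS TotalPurchases,
                            SUM(pq.Quantity) AS TotalProductsPurchased,
                            SUM(p.TotalCost) AS TotalCost
                    FROM    Customer c
                    JOIN    Purchase p ON p.CustomerId = c.Id
                    JOIN    (SELECT PurchaseId, SUM(Quantity) AS Quantity
                             FROM PurchasedProduct
                             GROUP BY PurchaseId) pq ON pq.PurchaseId = p.Id
                    GROUP BY c.Id, c.FirstName, c.LastName, c.Email")
                .SetResultTransformer(NHibernate.Transform.Transformers.AliasToBean<CustomerPurchaseHistoryReadModel>())
                .List<CustomerPurchaseHistoryReadModel>();
        }
    }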


Summary:
  • Don't use your entities and aggregate roots for property mash-up or summarisation. Create read models where these projections are required. 
  • The infrastructure layer should take care of the mapping. For example, use HQL (or a SQL view) to project data onto your read model.
  • Read models are just that, read-only models. This is why they are performant and why they should have no methods on them, just properties (they are immutable). 

Useful links: 

*Note: Code in this article is not production ready and is used for prototyping purposes only. If you have suggestions or feedback please do comment.