Thursday 3 September 2020

Lead Time Driven Delivery - Part 4, Stabilise through embedded testing

Before you read this, please read the prerequisite blog post, Focus on results, not methods, as it briefly explains the basic scientific thinking that will be used here.

How do you know whether a process, piece of software, hardware, concept or idea will behave correctly? And how do you know whether it will meet the required quality, performance and reliability levels? It all comes down to knowing how the thing behaves under certain conditions, and more specifically, knowing when it will work and when it will fail.

To make this a bit more concrete, let’s imagine that a customer with a lot of money went to two different software houses, one called Henry’s Software and the other called Adam’s Apps. The customer asked both to develop identical software. To keep things simple, we will focus on a single requirement. Here is what each company wrote down for it.

Henry’s Software: As a user when I open the mobile app for the first time, I would like to be able to quickly and easily sign into my company’s account.

Acceptance Criteria:
  • User enters information that they know
  • User is directed to the relevant login screen
  • User enters their username and password
  • User is successfully directed to the company account.

Adam’s Apps: As a user when I open the mobile app for the first time, I would like to be able to quickly and easily sign into my company’s account.

Acceptance Criteria:
  • User knows either (1) the name of the company that they work for or (2) their corporate email address.
  • The story needs to deliver an experience that facilitates login with just one of these pieces of information; there is no need for both.
  • If the email address is used, a full, valid email address must be provided before the company is looked up.
  • If the company name is used, at least 3 letters must be entered before the company is looked up; this is done to slow down enumeration attacks.
  • If the email address or company name matches more than one company, a list of companies is shown so that the user can select which one they will be given a login screen for.
  • Once the user clicks on a company, they are redirected to that company’s configured authentication provider for login.
  • If the user cancels out of the login screen, they are redirected back to the company selection screen.
  • After 6 attempts to provide company information or an email address, the user will be asked to wait for 30 seconds, then 1 minute, then 2 minutes, following [30 seconds * num of attempts], all the way up to 24 attempts.
  • The company lookup approach must be discussed with senior customer support team member(s) to ensure that it will result in the fewest customer support calls. Their feedback must be documented.
  • No web request must take more than 2 seconds.

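Because Adam’s Apps’ criteria are this specific, they translate almost directly into automated tests. Here is a minimal sketch, in Python, of how the lockout rule could be encoded and tested. The helper name lockout_delay_seconds and the attempt-counting convention are my own assumptions, and I follow the bracketed [30 seconds * num of attempts] rule for the delays.

```python
import pytest

MAX_FREE_ATTEMPTS = 6    # lockout starts after 6 attempts, per the criteria
MAX_TOTAL_ATTEMPTS = 24  # hard cap from the criteria


def lockout_delay_seconds(attempt_number: int) -> int:
    """Hypothetical helper: wait time, in seconds, before a given attempt.

    Implements the bracketed rule from the acceptance criteria,
    counting only attempts past the first six, which are free.
    """
    if attempt_number <= MAX_FREE_ATTEMPTS:
        return 0
    if attempt_number > MAX_TOTAL_ATTEMPTS:
        raise ValueError("no further attempts allowed")
    return 30 * (attempt_number - MAX_FREE_ATTEMPTS)


@pytest.mark.parametrize("attempt, expected", [
    (1, 0),     # early attempts incur no delay
    (6, 0),     # the sixth attempt is still free
    (7, 30),    # first delay: 30 seconds
    (8, 60),    # second delay: 1 minute
    (24, 540),  # last permitted attempt
])
def test_lockout_delay(attempt, expected):
    assert lockout_delay_seconds(attempt) == expected


def test_no_attempts_past_the_cap():
    with pytest.raises(ValueError):
        lockout_delay_seconds(25)
```

The point is not this particular implementation; it is that every one of Adam’s Apps’ bullet points can be pinned down by a test like this, while none of Henry’s Software’s can.
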
Remember, it is exactly the same requirement. Henry’s Software’s documentation is vague; it does not provide specific information that can be used to verify and test what was delivered. Adam’s Apps, on the other hand, captures user behaviour, expectations, delivery options, prerequisites and system performance. Adam’s Apps can establish specific test criteria for this feature and verify them when it is delivered.

Henry’s Software might say that Adam’s Apps’ documentation is heavyweight and too specific. Their argument might be:

  1. The customer is always available; requirements will emerge iteratively or will be discussed (see below).
  2. Conversation over documentation: the team knows what was discussed, so we don’t need to be explicit. Also, remember that the customer is always available, so it is possible to reconfirm.
  3. The team is trusted to make the right decisions; there is no need to be so specific.

Of course, we do want customers or business analysts (who represent the customer) to always be available, but they are not, due to holidays, meetings and competing corporate priorities. Individuals might have discussed this requirement with the customer; however, these individuals might leave, go on holiday, get sick or have to do other work, which means the story might be picked up by someone who has no context. That person will not be able to fill in the assumptions and will in turn make mistakes, which will cause rework. If the business analysts, or whoever writes the story, know the criteria, they should write them down and not leave assumptions unstated. Finally, on point 3, trust does not mean the team should not take the vague requirement and refine it to be testable; at the end of the day, they will still need to test it.

The main argument of this section is not about requirements documentation; it is about testing. Regardless of the format in which we write requirements down, I hope we can agree that when you know under what conditions something will work and fail, it is possible to test that process, feature, concept, idea or piece of hardware against specific test criteria.

As software people, we all want to deliver great software experiences to our users on time. However, so many projects overrun; things go wrong for a million reasons. Normally something goes wrong early in the process and then cascades issues downstream. These issues are usually systemic, i.e. they are bugs in your delivery process. They typically emerge because a process is non-existent, not followed, opaque, regressed, out of date or poorly designed.

Volatility: liability to change rapidly and unpredictably, especially for the worse.

It can also be hard for managers to see where an issue has actually stemmed from, as people are looking at the whole thing and not the individual parts. One way to increase predictability and transparency in your process is to break it down into component parts and then expose each component part to its own test criteria. Why would you want to do this? Remember, the reason we are doing any of this is to reduce lead time. Additionally, the further a faulty feature travels through your development lifecycle, the more costly it is to fix: it adds more lead time to the faulty feature, and it adds additional lead time to other features in the queue! This means we want to catch bugs as soon as possible and release work into the next stage only if it has passed all of the relevant test criteria. This way you will stabilise your delivery and reduce lead time across your entire development lifecycle.
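
To make the idea of gated stages concrete, here is a minimal sketch, again in Python, of a pipeline where work moves to the next stage only once every test criterion for the current stage passes. The stage names and criteria are illustrative assumptions, not a prescription for your lifecycle.

```python
from dataclasses import dataclass, field
from typing import Callable

# A criterion inspects a work item (a plain dict here) and passes or fails it.
Criterion = Callable[[dict], bool]


@dataclass
class Stage:
    name: str
    criteria: list[Criterion] = field(default_factory=list)

    def passes(self, work_item: dict) -> bool:
        return all(criterion(work_item) for criterion in self.criteria)


# Illustrative stages and criteria; your lifecycle will differ.
PIPELINE = [
    Stage("refinement", [lambda w: bool(w.get("acceptance_criteria"))]),
    Stage("development", [lambda w: w.get("unit_tests_pass", False)]),
    Stage("release", [lambda w: w.get("max_request_seconds", 99.0) <= 2.0]),
]


def advance(work_item: dict) -> str:
    """Return the name of the first stage the item fails, or "done".

    A failure stops the item early, where it is cheapest to fix,
    instead of letting it travel downstream and inflate lead time.
    """
    for stage in PIPELINE:
        if not stage.passes(work_item):
            return stage.name
    return "done"


story = {"acceptance_criteria": ["..."], "unit_tests_pass": True,
         "max_request_seconds": 1.4}
print(advance(story))  # -> "done"
```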

You might be thinking: but we have documentation for all of this stuff. We know how things should work. We have a definition of done, a definition of ready, test plans and so on. Yes, but the problem is that this documentation normally sits outside of your development lifecycle. It is a separate document that you need to read, and let’s face it, you probably don’t read it often enough. You probably refresh yourself once in a while, just before the auditors knock on your door. The real challenge is to embed this process documentation into your development lifecycle so that the system becomes self-checking, self-testing and self-auditing. To make this more concrete, here are a few examples of what you can do to enable this:

  • Embedded checklist - If you are using some sort of Application Lifecycle Management software, consider embedding a version of your definition of done / ready into the Epic, Feature, Story or Task as a checklist or validation rules. When an individual fills in the Epic, they have to confirm that they have done X, Y and Z, or they cannot complete the work until something is done (see the sketch after this list).
  • Public self-accountability - I don’t know about you, but when I have to email a project update to a large group of people, I 100% want to get my facts right. When we report something publicly, we tend to be more transparent, accountable and self-governing; normally, we don’t want to lose face. This means it can be a good idea to get teams to send out fortnightly project updates to senior stakeholders. There are many ways this can be used.
  • KPIs - If you know how you and your peers are being evaluated, you will change your behaviour around that evaluation. In this case, you and your team should be evaluated against Lead Time.
  • Automation - It should be no surprise that when processes are automated correctly, their reliability and speed drastically improve. Conceptually, what you want is your entire "Development Lifecycle As Code": if a process does not need human creativity, it should be standardised and automated away (where appropriate).

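As a sketch of the embedded checklist idea mentioned above: most ALM tools let you attach validation rules to work item transitions, and the snippet below imitates that behaviour with a hypothetical definition-of-done checklist that blocks completion until every item is confirmed. The checklist entries are examples only.

```python
# Hypothetical definition of done, embedded in the tool rather than
# living in a separate document nobody reads.
DEFINITION_OF_DONE = [
    "acceptance criteria verified",
    "code peer reviewed",
    "automated tests added and passing",
    "documentation updated",
]


class ChecklistError(Exception):
    """Raised when someone tries to complete work with unchecked items."""


def complete_story(checked_items: set[str]) -> None:
    """Refuse the 'complete' transition until the whole checklist is ticked."""
    missing = [item for item in DEFINITION_OF_DONE if item not in checked_items]
    if missing:
        # The tool forces the conversation to happen now,
        # not after the work has shipped.
        raise ChecklistError(f"cannot complete story, missing: {missing}")
    print("story completed")


complete_story({
    "acceptance criteria verified",
    "code peer reviewed",
    "automated tests added and passing",
    "documentation updated",
})
```
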
Testing in the development lifecycle is not just about writing down testable requirements, as we did for Henry’s Software and Adam’s Apps above. It is about embedding quality controls throughout the process, and these quality controls might look nothing like what you would expect. You might not even think of them as quality controls. Do you think of automation as a quality control? A routine email sent out by an individual? Your weekly stakeholder update meeting? What about your morning stand-up? These are all forms of quality control designed to catch faults and problems in your process.
