Team Velocity Doesn’t Really Matter

First thought? That is crazy talk! You have probably heard from just about everyone you know that you want the highest velocity possible and should maximize it. But what if I told you that a team delivering 15 points per iteration with a healthy burndown might actually be better than a team delivering 30 points with an equally healthy burndown?

 

The question is: out of that velocity, is the team actually producing something we can learn from and use to discover? Certainly not every iteration produces work that can go directly to production, but did the full value of the velocity yield something that can be demoed to the Product Owner, stakeholders, and maybe even customers? This is where value comes in. After all, the whole reason we do Agile is to iterate, right? And by iterate, we don't mean iterate in time boxes until a project is complete (there is no value in that exercise alone). If we are not iterating on the product increment that should be yielded at the end of every iteration, then what are we iterating on, besides simply breaking a project into time boxes?

As an engineering leader, Product Owner, ScrumMaster, etc., if you are using the team's velocity by itself to indicate whether a team is performing well, then you might need to look at the actual value being produced out of that velocity.

In my example above, the team that produces 15 points of velocity on average but has a fully operational product increment, one which can be demoed in its entirety to people in the organization and to customers, is performing MUCH better than a team that produces double, triple, or more, yet has only a small number of stories that make up a demo-able product increment.

The reason we do Agile in the first place is so we can iterate AND LEARN. If we have a project with a 3-month SWAG estimate and we do not get to a demo-able product increment until halfway through, it means we are not learning, not getting feedback, and unable to pivot in a different direction or make enhancements within a reasonable time. It means we are resistant to change. That means heavier refactoring because the builds are bigger, longer timelines because so much scope gets added to the project late, and a much more expensive capital investment in the thing you are trying to build. Where is the ROI from this thing we call Agile/Scrum/Lean/etc.?

There are many factors which go into a team's velocity, but here are the ones I want to focus on for the sake of this article:

  • A team's failure to self-organize, or a lack of empowerment to do so
  • Poorly crafted stories which result in a product increment that provides no end-user value
  • Poorly defined release initiatives which push all the discovery and feedback risk toward the end of the project, when it is too late

I myself have definitely gone down the route of defining a backlog and then slicing it up into smaller parts, thinking that was the magical solution: if we had smaller parts, then it would be easier to estimate, predict, and deal with change. But while I was attempting to create a project robust enough to handle change, I should have been focusing on a project that was more resilient and anti-fragile.

The key to a performing team is also understanding what makes up a team. In software development, the team is the Product Owner, software developers, UX, UI, testers, and ANYONE else who has skin in the game for the project's completion. To have a performing team on the project, you need everyone involved from the moment the epics are first conceived.

The trick to having a velocity that matters is having a project with well-defined, vertically sliced stories that are contained within a Minimum Marketable Feature (MMF) definition. Some people like to say MVP, but that term is misleading and many people don't understand what it means. If you say MMF, however, the name itself means something, because the feature has to be marketable. A Product Owner might have a huge initiative to tackle, but show me an initiative of any size and I can show you many marketable features which can emerge early, so we can discover and learn quickly and truly iterate to create an amazing product and innovate.

Take the following example in a big monolithic application, where it would be hard to conceive of getting value out of the initiative until it was done:

Strategic Initiative: Remediate Security Exploits Identified by Security Company XYZ

Project Description: Remediate CSRF Vulnerability Across Entire Application

Let's use an example where you have a very old application, perhaps consisting of 3 or 4 different architectures, none of which share a framework or even decoupled services that would let them leverage common libraries, components, or services.

How would you tackle it? Maybe treat each architecture as its own project? Maybe create the base architecture to allow for CSRF remediation in each one and then start patching each feature? Maybe learn from how the last iteration went and then update the rest of your project timeline to reflect the slip in progress? But where and when would you get the value, which in this case is CSRF being resolved for a feature? What if the project was estimated to take at least 6 months just to remediate 1 of the architectures? Would you wait 6, 12, or 18 months to learn whether CSRF was successfully remediated? What happens if it was not? Do you continue to invest in something when you already have your next roadmap item planned and customers breathing down your neck?

I think you get the point.

Instead, flip your thinking. How can you and the team position the work so you get immediate feedback and results after the very first iteration, and can apply what you learned to the rest of the project? That is the true nature of iterating.

This doesn't just take a good Product Owner, by the way. It takes a good team: the Product Owner and the rest of the team tackling the problem together and self-organizing to learn rapidly with every potentially shippable product increment.

In the example of the CSRF security problem, where we might have thousands of forms across the three architectures, I might break up the work like this under my project "Remediate CSRF Vulnerability Across Entire Application":

MMF EPIC 1 Titled:
V1, V2, V3 Architecture Solution + Login Page (V1), Add User Form (V2), Add License Service (V3) - MMF

Description: Establish an architecture solution which can remediate CSRF across the 3 architectures and create a POC with the three CSRF vulnerable access points across those architectures.

Value Description: CSRF will be remediated in the Login Page, Add User Form and Add License Service.

Now, the important thing here is that this is an MMF, meaning that when it is done, in just one iteration, we can go straight to production. And when we go to production, we learn just how effective our solution is and can use the results across the three architectures, with 3 real-world end-user examples, to improve the definition and outcome of the rest of the project.
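To make the "architecture solution" half of that epic a bit more concrete, here is a minimal sketch of the kind of shared, framework-agnostic CSRF token helper the POC might prove out across V1, V2, and V3. The synchronizer-token approach and all the names here (SECRET_KEY, csrf_token_for, validate_csrf) are my own illustration, not something prescribed by the epic:

```python
# Minimal sketch of a shared CSRF helper that all three architectures could call.
# Illustrative only: a real solution would load the key from configuration and
# hook into each architecture's session and form-rendering machinery.
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # stand-in for a key from secure config


def csrf_token_for(session_id: str) -> str:
    """Derive a per-session token that each architecture embeds in its forms."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()


def validate_csrf(session_id: str, submitted_token: str) -> bool:
    """Reject any form post whose token does not match the session's token."""
    expected = csrf_token_for(session_id)
    return hmac.compare_digest(expected, submitted_token)


if __name__ == "__main__":
    token = csrf_token_for("session-123")
    print(validate_csrf("session-123", token))     # True: legitimate post
    print(validate_csrf("session-123", "forged"))  # False: forged post rejected
```

The point is not this particular helper; it is that one small, shared solution plus three real access points (Login Page, Add User Form, Add License Service) is enough to ship, demo, and learn from in a single iteration.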

Now hold on a second: what if the architectures were so unwieldy that the architecture work alone would take multiple iterations? Easy: run an experiment, or as some people call it, a "Spike" or a "Proof of Concept." Apply your hypothesis and accomplish the same goal, just in a smaller vertical slice. The goal here is value and learning.

You can also slice an MMF into smaller MMFs. We could, for example, have created an MMF for each of the architectures, with a single CSRF vulnerability for each MMF. But in this case, as a Product Owner working with the team, the team may have thought it could come up with a single architecture solution and then prove the concept by implementing 1 CSRF-vulnerable form from each of the architectures. The team was then able to self-organize and come up with the best way to tackle this problem.

Now, when it comes to stories in the MMF, you want vertically sliced stories. What that means is: should I complete a story, the story by itself should be demo-able so we can learn from it. If a story was to remediate a CSRF vulnerability in a single form, then the team would punch through from the form to the new architecture so that the CSRF vulnerability is remediated within that single story.
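As a concrete illustration of such a vertically sliced story, here is a rough sketch of what "punching through" a single V1 login form might look like, from the hidden token field in the markup down to the server-side check. The handler shape and every name here are hypothetical assumptions, and the token logic mirrors the helper sketched earlier:

```python
# Sketch of one vertically sliced story: the V1 login form both renders the
# CSRF token and enforces it on submit, so the story is demo-able on its own.
# All names and the request shape here are illustrative assumptions.
import hashlib
import hmac

SECRET_KEY = b"demo-only-secret"  # stand-in for a key from secure config


def csrf_token_for(session_id: str) -> str:
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()


def render_login_form(session_id: str) -> str:
    # The UI half of the slice: the form now carries a hidden token field.
    return (
        '<form method="POST" action="/login">'
        f'<input type="hidden" name="csrf_token" value="{csrf_token_for(session_id)}">'
        '<input name="username"><input name="password" type="password">'
        "<button>Log in</button></form>"
    )


def handle_login(session_id: str, form_fields: dict) -> int:
    # The server half of the slice: posts without a valid token are rejected
    # before the existing login logic ever runs.
    submitted = form_fields.get("csrf_token", "")
    if not hmac.compare_digest(csrf_token_for(session_id), submitted):
        return 403  # forged or missing token
    return 200  # continue into the existing login flow


if __name__ == "__main__":
    sid = "session-123"
    good_post = {"csrf_token": csrf_token_for(sid), "username": "a"}
    print(handle_login(sid, good_post))          # 200: token present and valid
    print(handle_login(sid, {"username": "a"}))  # 403: token missing
```

Because the story touches both the form and the check behind it, the team can demo an actual blocked forgery at the end of the iteration, which is exactly the kind of value the velocity is supposed to represent.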

If we had a poorly defined story, for example one that was only to establish the architecture, then at the end of the iteration we would probably have nothing to demo. Therefore we will probably not learn anything until the next iteration, or perhaps the iteration after that. In that case, even if the team's velocity was 30, 90, or 1 million, the actual velocity is a big fat ZERO, because nothing of value came out of the iteration. After all, in this example the value is not an architecture; it is the remediation of a CSRF vulnerability in a form. Sure, maybe we could have shipped the architecture to production, but if nothing is going to use that architecture, then there is no value, and we should not consider ourselves successful, let alone performing. On the other hand, if we actually remediated the CSRF vulnerability in a single form, that is the value we are after.

Don't think of the project as successful only once the entire application is remediated of its CSRF woes; every single form you touch and remediate is a big, big win. Best of all, you get to prove that what you are doing is working, and if it is not, you learn, pivot, and adapt fast so the rest of your project goes even better.

We can work as hard as we want and produce tons of code and stories. But if, at the end of a story, when we reach our definition of done, the value does not exist, then why does the velocity even matter? Simply put, it does not, at least not for the sake of delivering value to a business, which is why we all exist in the first place.

Not sold? I challenge you. Send me any project you want to get off the ground, and I will show you how you can generate quality MMF definitions and get MMF value each and every iteration.
