What I do

Hint: I’m more than a software developer

Software delivery is hard. There are many tiny details that need to be considered, technologies to master, and business cases to learn. Delivering working software is a huge challenge.

It’s also expensive. A small software development team (2 good developers and a project manager) can easily cost $250,000/year.

Finally, it’s risky. Studies have shown that most projects are 6 to 12 months behind schedule and 50 to 100% over budget.

We try to deliver software:

  1. With high quality
  2. On budget
  3. On time

I present topics and give talks to help developers and Agile Managers hit those goals.

This site serves as an information resource for .NET Developers and Agile Managers to get a glimpse into how I work in my capacity as an SDLC auditor. It’s meant as a starting point for further conversation.

On the .NET Developer side, I focus on three topics:

  • Staying on track
  • Keeping the quality high
  • Working with management

On the Agile Management side:

  • Be efficient (or ruthless)
  • Measure and adjust

There are also some fun articles on the .NET Micro Framework and embedded hardware programming. It’s a side hobby of mine and I love sharing what I’ve learned.



Latest Post

The Product Owner

One person can make a difference

I’ve spent the last 13 years working in, managing, or building Agile teams. After nearly two decades in the Boston area, I’ve moved to Austin to be closer to my family. This has given me time for introspection regarding what has made certain teams more successful than others.

I’ve worked with many Agile teams. To be honest, most have been only moderately successful. They follow many of the Agile (or Scrum) principles and ceremonies. Nonetheless, they still manage to maintain the status quo. They’re merely assigning a name, Agile, to their process.

Three teams have stood out as exceptional during my 24 years of hands-on-the-keyboard software development and 13 years of Agile experience. I’ve spent quite a bit of time looking at data collected and thinking about those teams and what made them successful. I’ll be writing posts about the various qualities which made them great.

First, let me define what I mean by success. In this case, a successful team:

  • Delivers measurably high quality software
  • Delivers measurably significant business value
  • Delivers on-time as promised to the business

Note that I didn’t say anything about group cohesion, happy hours, work-life balance, ping-pong tournaments won, or anything else outside of the iron fact that we’re hired to deliver a project. Some organizations are completely happy as long as the business isn’t on fire. That is not what I’m talking about here. I’m focusing on teams which are expected to drive revenue and growth for the business.

The #1 thing the successful teams have is one identified, assigned, dedicated Product Owner (PO). Let’s break that down.


The Product Owner: there can be only one. It’s this single person’s responsibility to know what to build. They know the customer and advocate for their needs. They know the use-cases. They work with the development team to think through all the little details of getting the software built. They are the de-facto repository for everything the software does or needs to do.


Everyone knows who the Product Owner is. They have a name, a desk, an email account. They attend all the meetings and are always present when the team needs them.


This person has been told being the Product Owner is their job. They understand the responsibilities. Meeting those responsibilities is a requirement for continued employment.


They are only working on this one project. It’s all they think about. It’s their sole responsibility.

When I think about the teams that failed worst, none of them had a Product Owner. They would try one of the following unsuccessful approaches:

  • Team ownership
    • Yes, the team collectively owns the code. They shouldn’t be responsible for the business value too. Let developers focus on writing great software.
  • Non-development distributed ownership
    • This is the “multiple Product Owner” strategy. This is usually the case where people want to give direction and have oversight, but they don’t want responsibility or to do the day-to-day work. This is an especially insidious problem in larger organizations.

I won’t guarantee that having a Product Owner will result in success. However, I can say from experience that without a Product Owner you will likely fail. You may muddle along. You may keep the lights on. You may keep the business running just enough to be a lifestyle company where everyone gets to leave at 4pm on Friday. But you’ll never really make a difference.

Moving to Austin

I have lived in the Boston area for 17 years. I love Boston and its people. However, I have decided to take a job with Intersys Consulting in Austin, TX. This is a major change for me and my wife. I wish everyone well and hope to see you all again soon. We will be in Austin starting July 28th. Please wish us safe travels as we start driving on July 25th.

We’re looking forward to living in a more compact city without the hassles of snow. We’re trading moving from one heated container to another in the winter… for moving from one air-conditioned container to another in the summer. :)

Moving to Austin will open up all new opportunities for us. I look forward to taking advantage of all the city has to offer.

Please plan to visit us… soon!

Quality Tracing in Agile

Quality Tracing

Software delivery is hard. There are many tiny details that need to be considered, technologies to master, and business cases to learn. Delivering working software is a huge challenge. It’s also expensive. A small software development team (2 good developers and a project manager) can easily cost over $300,000/year. Finally, it’s risky. Studies have shown that most projects are 6 to 12 months behind schedule and 50 to 100% over budget.

Solid agile practices aim to solve these issues. Teams are more likely to deliver high quality software on-time and on-budget by using well-defined roles, transparency, cultural expectations, ceremonies, and prescriptive technical practices. Today, software teams have an unprecedented ability to build high quality products quickly. However, most agile practices de-emphasize a key component of software delivery: constraint identification and resolution.

In this context, a constraint is anything which impedes the velocity of the development team. Constraints can be broken down into three categories:

  • Business
  • Development
  • QA

In this article, I will discuss the techniques and metrics I use to identify and remove QA constraints using Quality Tracing in Agile software development.

Any user (or system) reported defect is the entire team’s responsibility. Many organizations use QA departments as a final defense against releasing defects. However, justifying the cost of QA is often a challenge. It’s hard for the business to sponsor QA staff when defects slip through anyway. Still, development organizations champion the need for QA staff. Making the case, measuring performance, identifying constraints, and targeting QA for remediation is an important part of technical leadership.

Defects come in many forms: critical crashes, regressions, calculation errors, UI defects, and many others. QA is a role which demands attention to detail to save the customer from frustrating experiences caused by defects. However, there has never been an easy way to measure the “false negative” nature of QA efforts prior to release. Quality tracing is one of many metrics I use when managing software delivery to test and verify that QA is finding existing defects.

Figure 1 – QA Trace Results by Iteration

Quality tracing (QT) is the practice of intentionally introducing findable defects to a product during an iteration. Then, we measure how many defects are found by the QA staff in that iteration. All unfound defects are corrected and QT scores reported to the team at the end of the iteration. In short, it’s the gamification of their role. Figure 1 is an example of a single QA team’s QT score over 15 iterations.
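To make the scoring mechanics concrete, here is a minimal sketch of how a per-iteration QT score could be computed. The found-versus-seeded ratio, the field names, and the sample data are my own illustrative assumptions; the article doesn’t prescribe a particular formula or tooling.

    # Minimal sketch of a per-iteration Quality Tracing (QT) score.
    # Assumption: the score is simply (seeded defects found by QA) / (seeded defects introduced).
    # Field names and data structures are illustrative, not taken from any specific tool.

    from dataclasses import dataclass

    @dataclass
    class SeededDefect:
        description: str
        found_by_qa: bool

    def qt_score(seeded_defects: list[SeededDefect]) -> float:
        """Return the fraction of intentionally introduced defects that QA found."""
        if not seeded_defects:
            return 0.0
        found = sum(1 for d in seeded_defects if d.found_by_qa)
        return found / len(seeded_defects)

    # Example: an iteration where QA found 3 of 5 seeded defects -> QT score of 60%
    iteration = [
        SeededDefect("off-by-one in invoice rounding", found_by_qa=True),
        SeededDefect("regression in login redirect", found_by_qa=True),
        SeededDefect("mislabeled currency on report", found_by_qa=True),
        SeededDefect("combinatorial workflow edge case", found_by_qa=False),
        SeededDefect("stale cache after bulk import", found_by_qa=False),
    ]
    print(f"QT score: {qt_score(iteration):.0%}")  # QT score: 60%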

In the spirit of transparency, the entire team was fully aware of the process. Everyone was informed. Week 1 showed that zero defects were found. Weeks 2–5 are excellent examples of the Hawthorne (or observer) effect. People will change their behavior when they know they’re being observed and those observations are reported. In this case, they improved only slightly.

A low QT score can be interpreted several ways. First, it could be due to a poor understanding of the product and the requirements. Second, QA automation practices might need to be assessed. Lastly, poor attention to detail could be the cause. For this team, we approached the issue by focusing on requirements understanding and QA automation. During the entire time, I coached the team on the need for a ruthless attention to detail and the value of raising questions.

At week 4, we pulled QA from the back of the development process to the front. Meaning, QA members took ownership of working with business units and developers to write the specifications. This caused a dramatic increase over the next 2 iterations as QA staff became much more aware of the nuances of the product features they were expected to test and accept.

I knew we had a problem with a lack of QA automation by looking at the iteration flow charts, as seen in Figure 2. This chart shows a one-week iteration where stories are marked as accepted once they have been passed by QA. Ideally, all stories should be accepted by day 7. However, at day 7, we can see 50% of stories are still waiting to get through QA. This team had almost no QA automation in place.

Figure 2 – Iteration flow chart showing QA constraint
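As a rough illustration of how a chart like Figure 2 can be turned into a signal, the sketch below flags an iteration as QA-constrained when a large share of stories are still unaccepted on the final day. The 50% threshold and the function shape are assumptions chosen to match the example above, not a rule from the article.

    # Rough sketch: flag a QA constraint from end-of-iteration story status.
    # Assumptions: a story counts as "accepted" only once QA has passed it, and
    # an iteration is considered QA-constrained when the unaccepted share on the
    # final day exceeds a chosen threshold (50% here, matching the example text).

    def qa_constrained(accepted_per_day: list[int], total_stories: int, threshold: float = 0.5) -> bool:
        """accepted_per_day[i] is the cumulative count of accepted stories at day i+1."""
        accepted_at_end = accepted_per_day[-1]
        unaccepted_share = 1 - accepted_at_end / total_stories
        return unaccepted_share >= threshold

    # Example resembling Figure 2: 10 stories, only 5 accepted by day 7.
    print(qa_constrained([0, 0, 1, 2, 3, 4, 5], total_stories=10))  # True -> QA is the bottleneck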

In Figure 1, weeks 7–10 show a dramatic increase in the QT score due to the implementation of QA automation. The business was also starting to see the value of these changes as user-reported defects dropped dramatically.

In week 12 we added some particularly hard defects dealing with combinatorial workflow logic. We were not surprised to see a failure. However, we did have a long conversation on the definition of a “findable defect” and what could reasonably be expected to be experienced by users. As a result of our discussions, we strengthened our CI and QA test suites dramatically. This resulted in our first 100% QT score.

At week 15 we lost one of our beloved QA team members. She was poached by another company. Losing that institutional, technical, and process knowledge had an immediate and observable effect on the team in terms of QT score and morale.

The result was much more than simply raising the QT score. In the beginning, the QA members were thought of as “gatekeepers” who needed to be appeased to approve a release. This caused confusion with business leaders who didn’t understand why users continued to report defects. They saw no value in QA staff.

By week 15, there was a dramatic decrease in user-reported defects. The QA members were valuable repositories of business knowledge. They worked with the business to define non-technical requirements and provide ideas for future features. In addition, this took the burden off business and development staff to document requirements. Planning meetings and story grooming sessions became shorter and smoother by having documentation which came from the business but was written for developers. The importance of smoothing the business-developer interaction cannot be overstated in this case. Lastly, there was a demonstrable loss in productivity when we lost a QA member. This reinforced the value of QA staff. Business leaders can now directly see the effect of losing QA staff and quality practices on their customer satisfaction, as measured by user-reported defect flow.
