What I do
Hint: I’m more than a software developer
Software delivery is hard. There are many tiny details that need to be considered, technologies to master, and business cases to learn. Delivering working software is a huge challenge.
It’s also expensive. A small software development team (2 good developers and a project manager) can easily cost $250,000/year.
Finally, it’s risky. Studies have shown that most projects are 6 to 12 months behind schedule and 50 to 100% over budget.
We try to deliver software:
- With high quality
- On budget
- On time
I present topics and give talks to help developers and Agile Managers hit those goals.
This site is an information resource for .NET Developers and Agile Managers to get a glimpse into how I work in my capacity as an SDLC auditor, and a starting point for further conversation.
On the .NET Developer side, I focus on three topics:
- Staying on track
- Keeping the quality high
- Working with management
On the Agile Management side:
- Be efficient (or ruthless)
- Measure and adjust
There are also some fun articles on the .NET Micro Framework and embedded hardware programming. It’s a side hobby of mine and I love sharing what I’ve learned.
I have lived in the Boston area for 17 years. I love Boston and its people. However, I have decided to take a job with Intersys Consulting in Austin, TX. This is a major change for me and my wife. I wish everyone well and hope to see you all again soon. We will be in Austin starting July 28th. Please wish us safe travels as we start driving on July 25th.
We’re looking forward to living in a more compact city without the hassles of snow. We are trading moving from one heated container to another in the winter… for moving from one air-conditioned container to another in the summer.
Moving to Austin will open up all new opportunities for us. I look forward to taking advantage of all the city has to offer.
Please plan to visit us… soon!
Software delivery is hard. There are many tiny details that need to be considered, technologies to master, and business cases to learn. Delivering working software is a huge challenge. It’s also expensive. A small software development team (2 good developers and a project manager) can easily cost over $300,000/year. Finally, it’s risky. Studies have shown that most projects are 6 to 12 months behind schedule and 50 to 100% over budget.
Solid agile practices aim to solve these issues. By using agile roles, transparency, cultural expectations, ceremonies, and prescriptive technical practices, teams are more likely to deliver high-quality software on time and on budget. Today, software teams have an unprecedented ability to build high-quality products quickly. However, most agile practices de-emphasize a key component of software delivery: constraint identification and resolution.
In this context, a constraint is anything which impedes the velocity of the development team. Constraints can be broken down into three categories:
In this article, I will discuss the techniques and metrics I use to identify and remove QA constraints using Quality Tracing in Agile software development.
Any user (or system) reported defect is the entire team’s responsibility. Many organizations use QA departments as a final defense against releasing defects. However, justifying the cost of a QA department is often a challenge. It’s hard for the business to sponsor QA staff when defects slip through anyway. Still, development organizations champion the need for QA staff. Making the case, measuring performance, identifying constraints, and targeting QA for remediation is an important part of technical leadership.
Defects come in many forms: critical crashes, regressions, calculation errors, UI defects, and many others. QA is a role which demands attention to detail to save the customer from frustrating experiences caused by defects. However, there has never been an easy way to measure the “false negative” nature of QA efforts prior to release. Quality tracing is one of many metrics I use when managing software delivery to verify that QA is finding existing defects.
Quality tracing (QT) is the practice of intentionally introducing findable defects into a product during an iteration. Then, we measure how many of those defects are found by the QA staff in that iteration. All unfound defects are corrected and QT scores are reported to the team at the end of the iteration. In short, it’s the gamification of the QA role. Figure 1 is an example of a single QA team’s QT score over 15 iterations.
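As a concrete sketch, the QT score for an iteration is simply the share of seeded defects that QA found. The types and names below are my own illustration, not the API of any particular tool:

```csharp
// Illustrative QT score calculation; type and member names are my own.
public record Iteration(int Week, int DefectsSeeded, int DefectsFound);

public static class QualityTracing
{
    // QT score: the percentage of intentionally seeded defects that QA
    // found during the iteration. An iteration with nothing seeded scores 100.
    public static double Score(Iteration iteration) =>
        iteration.DefectsSeeded == 0
            ? 100.0
            : 100.0 * iteration.DefectsFound / iteration.DefectsSeeded;
}
```

For example, finding 1 of 5 seeded defects yields a QT score of 20, and finding all 5 yields 100.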
In the spirit of complete transparency, the entire team was aware of the process from the start. Week one showed that zero defects were found. Weeks 2 – 5 are excellent examples of the Hawthorne (or observer) effect: people change their behavior when they know they’re being observed and those observations are reported. In this case, they improved only slightly.
A low QT score can be interpreted several ways. First, it could be due to a poor understanding of the product and the requirements. Second, QA automation practices might need to be assessed. Lastly, poor attention to detail could be the cause. For this team, we approached the issue by focusing on requirements understanding and QA automation. During the entire time, I coached the team on the need for a ruthless attention to detail and the value of raising questions.
At week 4, we pulled QA from the back of the development process to the front. That is, QA members took ownership of working with business units and developers to write the specifications. This caused a dramatic increase over the next 2 iterations as QA staff became much more aware of the nuances of the product features they were expected to test and accept.
I knew we had a problem with a lack of QA automation by looking at the iteration flow charts, as seen in Figure 2. The chart shows a one-week iteration where stories are marked as accepted once they have been passed by QA. Ideally, all stories should be accepted by day 7. However, at day 7, we can see 50% of stories are still waiting to get through QA. This team had almost no QA automation in place.
In Figure 1, weeks 7 – 10 show a dramatic increase in the QT score due to the implementation of QA automation. The business was also starting to see the value of these changes as user-reported defects dropped dramatically.
In week 12, we added some particularly hard defects dealing with combinatorial workflow logic. We were not surprised to see a failure. However, we did have a long conversation on the definition of a “findable defect” and what users could reasonably be expected to experience. As a result of our discussions, we strengthened our CI and QA test suites dramatically. This resulted in our first 100% QT score.
At week 15, we lost one of our beloved QA team members. She was poached by another company. Losing that institutional, technical, and process knowledge had an immediate and observable effect on the team’s QT score and morale.
The result was much more than simply raising the QT score. In the beginning, the QA members were thought of as “gatekeepers” who needed to be appeased to approve a release. This caused confusion with business leaders who didn’t understand why users continued to report defects. They saw no value in QA staff.
By week 15, there was a dramatic decrease in user-reported defects. The QA members had become valuable repositories of business knowledge. They worked with the business to define non-technical requirements and provide ideas for future features. In addition, this took the burden of documenting requirements off business and development staff. Planning meetings and story grooming sessions became shorter and smoother thanks to documentation that came from the business but was written for developers. The importance of this smoothing of the business-developer interaction cannot be overstated in this case. Lastly, there was a demonstrable loss in productivity when we lost a QA member. This reinforced the value of QA staff. Business leaders can now directly see the effect of losing QA staff and quality practices on their customer satisfaction, as measured by the flow of user-reported defects.
Windows Presentation Foundation (WPF) is the de facto Windows desktop technology. WPF allows the rapid creation of desktop applications with amazingly complex functionality. The hard part has always been making a desktop application look good. Functionality may be excellent, but the square controls and gunmetal-grey colors don’t have any wow-factor.
I’ll show you how to add the WPF skins from Syncfusion to your UI. It involves a few easy steps.
First, I’m assuming you’ve added Syncfusion WPF NuGet sources to Visual Studio. I’ve explained that in a previous post: Syncfusion NuGet Sources.
- Add the Syncfusion WPF Tools to your project
- Update App.xaml to include the theme
- Change your MainWindow class to a ChromelessWindow
- Update MainWindow.xaml to set the VisualStyle of the theme
Add the Syncfusion WPF Tools to your project
For this example, I’m going to use the Blend style. You need to add a resource dictionary source for the theme you want.
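In App.xaml, that means merging the theme’s resource dictionary into the application resources. The merged-dictionary `Source` URI below is illustrative, not exact — check the path Syncfusion’s documentation gives for the theme package and version you installed:

```xml
<Application x:Class="MyWpfApp.App"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Application.Resources>
        <ResourceDictionary>
            <ResourceDictionary.MergedDictionaries>
                <!-- Merge the Blend theme resources. This pack URI is an
                     assumption; the assembly and dictionary names vary by
                     Syncfusion version, so take the exact path from their docs. -->
                <ResourceDictionary Source="/Syncfusion.Themes.Blend.WPF;component/Blend.xaml" />
            </ResourceDictionary.MergedDictionaries>
        </ResourceDictionary>
    </Application.Resources>
</Application>
```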
Change your MainWindow to ChromelessWindow
Open the MainWindow.xaml.cs file and change the inheritance to ChromelessWindow. You’ll need to add two references:
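The code-behind change ends up looking roughly like this. The two `using` directives are my assumption of the references involved, based on where Syncfusion’s `ChromelessWindow` and skin manager types typically live:

```csharp
using System.Windows;
using Syncfusion.Windows.Shared;   // ChromelessWindow (assumed namespace)
using Syncfusion.SfSkinManager;    // skin manager types (assumed namespace)

namespace MyWpfApp
{
    // Inherit from ChromelessWindow instead of Window so the Syncfusion
    // theme can style the window chrome as well as the controls.
    public partial class MainWindow : ChromelessWindow
    {
        public MainWindow()
        {
            InitializeComponent();
        }
    }
}
```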
Update MainWindow.xaml to set the VisualStyle to the desired theme
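Putting it together, MainWindow.xaml ends up roughly like this: the root element changes from `Window` to the Syncfusion `ChromelessWindow`, and the `SfSkinManager.VisualStyle` attached property selects the theme. This reflects the Syncfusion API as I’ve used it; the exact property name may differ in newer versions, so treat it as a sketch:

```xml
<!-- Root changed from Window to ChromelessWindow to match the code-behind. -->
<syncfusion:ChromelessWindow
        x:Class="MyWpfApp.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:syncfusion="http://schemas.syncfusion.com/wpf"
        syncfusion:SfSkinManager.VisualStyle="Blend"
        Title="MainWindow" Height="450" Width="800">
    <Grid>
        <!-- Controls placed here pick up the Blend theme automatically. -->
    </Grid>
</syncfusion:ChromelessWindow>
```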
Run your application. You’ll see all the controls have now changed to use the Syncfusion theme. Reach out to me if you run into problems. Happy coding with WPF.