Project Quality in IT - How to Make Sure You Get What You Want?

Table of contents:

  1. Quality as a business advantage
  2. But why put emphasis on project quality?
  3. But first off - what is quality?
  4. Can you perceive that parameter as a manageable factor?
  5. Managing quality and quality control in IT projects

Quality as a business advantage

Some time ago I started to notice that quality as a term comes up increasingly often as a buzzword in sales conversations and negotiations. When it comes down to selecting a tech partner, it is important to consider their strengths and their competitive strategy. Naturally, as a result of political and economic factors, the CEE region has strong ties with Western European capital and business and thus often functions as a subcontractor. To stand out as a dependable and versatile provider, it is, again, crucial to make your advantages outshine potential weaknesses such as different cultural codes, language and timezone barriers, or the delegation of duties across a new team.

Michael Porter, the creator of the generic strategies theory, would advise avoiding a tough war of attrition with local specialized teams and competitors focused on cost leadership, and instead “hedging your bets” by focusing on both cost and differentiation. To get there, one has to have a firm grip on processes that work flawlessly in the company to maintain cost discipline, and a creative approach to new business to be able to attract interesting deals. I would reckon that for most software houses and IT contractors, this comes down to a skillful product/project management department and the ability to select the right frameworks to manage with. One could try to detect high product/project management standards via external certifications or successful businesses built on solutions provided. However, those proofs can be fooled, bamboozled or, even worse, flimflammed. The acid test that ultimately confirms that knowledge is, in my opinion, the approach to quality.

But why put emphasis on project quality?

If we map out the stakeholders’ expectations towards the deal, we find that in most cases the principal seeks to maximize the value acquired through the deal, which derives from the conditions of the triple constraint, whilst the provider seeks to maximize its reward with due skepticism towards negotiating the triple constraint. In other words, one side wants it done well, fast, cheap and tricked-out, whereas the other knows this cannot be done and that compromises must be made. At first it looks like a stalemate, as the principal holds the scope and the budget non-negotiable, according to the rule that everything is important. After that comes a round of trimming off artsy functionalities. This process is obviously painful for stakeholders, and at this very moment the idea of using the quality of the product as collateral starts to tempt. Can one yield to the false assumption that “You don’t have to do it right. Just get it done and we’ll worry about that later!”? Yes - it is possible to spawn a prototype to verify business hypotheses and then move on to proper development. Is there a temptation to do it sloppily in order to charm stakeholders with low-hanging fruit? Yes. Will it merely take off the direct pressure they instill and, in the long term, cause a major outage or destroy the business? Also yes.

But first off - what is quality?

In the case of project management, it is not a grade of excellence of a certain product, but rather “the degree to which a set of inherent characteristics fulfills requirements,” as the ISO 9000 standard states.

To put it as simply as possible: instant Chinese noodles in a box are not appropriate for a formal meal, just as a three-Michelin-star restaurant is not a good idea for a casual get-together with your colleagues.
It’s all about the adequacy of expectations and meeting them with the proper set of functionalities that characterize the product.

Can you perceive that parameter as a manageable factor?

The end-user of software is always interested in its dependability and will be eager to trade feature-rich software that cannot be trusted for one that is scarce in features but dependable, even if predictive models and focus groups indicate otherwise. If we start to treat quality as something we can decrease in order to meet other criteria of the given project constraints, questions should arise about addressing the potential - and most probable - costs of poor or decreased quality, such as:

  • analyses, inspections, and reworks that are, of course, billable by someone,
  • redesigns,
  • price downgrades,
  • downtimes of software,
  • adjustments to complaints,
  • lost opportunities because of poor quality that repels customers.

The list is rather long. To make things worse, these are supplemented by hidden tolls, which might be more subtle yet no less deadly. Those include extra materials and manpower purchased, hosting space charges, the cost of errors in support operations, etc. They pile up and, if unnoticed or unforeseen, can multiply, easily surpass the direct costs of poor quality and, in effect, tank the project or organization financially.

There are two ways to act upon that knowledge. One could organize one's own study to find the exact ratio between the cost of poor quality and the potential value gained from balancing other project parameters, and find the optimum oneself - this requires additional resources and time, though it may open some doors along the way. The second approach avoids that by accepting that only very specific marketing strategies (such as price leadership) can justify exploring low-quality product niches, and thus works towards bringing total quality costs to a minimum by working on the quality of conformance.

Managing quality and quality control in IT projects

Understanding that quality is interlinked with the triple constraint is crucial to planning a project. In most cases, it is the end-user of the software who will be most affected by its successful performance or terminal failures, and thus their satisfaction or dissatisfaction should be considered primary objectives whilst planning out functionalities. I used the plural form deliberately - according to J. M. Juran, these are not opposites. Satisfaction comes from the features which encourage end-users to use and buy the product, whereas dissatisfaction takes root in the errors and shortcomings of said product and, in effect, causes its rejection. Aiming for a highly satisfying set of functionalities while ignoring the quality of the software as a whole will end in high customer churn, with users transitioning to other products with less flashy functionalities yet no technical deficiencies.

The role of customers’ expectations as the main rationale for building software has been explored well through real-life applications of models like Kano or QFD, so I will skip to the bottom line and mention the universal observations. End-users might initially favor various product features over dependability and anticipate satisfaction, but once familiar with the technical shortcomings, they churn and move on to another tool. This dynamic should keep us from trading the actual dependability of the system for a greater number of delightful functionalities. It is a truism, after all, to keep it simple yet reliable, but still, people have a tendency to fall for the bait of cutting corners, or to bargain with reality by throwing cash at the problem, only to realize that it is the KISS principle that works every time.

Going into business examples seems non-essential to me right now - a lack of quality that results in churn, or in painful and time-consuming conversations with the support team, is something everyone experiences sooner or later. Instead, I will focus on how to achieve the planned quality and help you understand that this field can be tamed with a little effort and discipline.

Whatever the nature of the work, there are three processes that are managed and used in order to control quality:

  • Quality planning, responsible for properly identifying the aforementioned needs of the end-user and creating processes/techniques to produce products accordingly;
  • Quality control, understood as a set of processes to compare planned quality goals with actual performance, along with procedures to act upon tolerance breaches;
  • Quality improvement, a set of processes to solidify changes aimed at reaching excellence.

Let us investigate them a bit.

In quality planning, it is crucial to find a base for setting quality goals. They can and should be based on the current technology stack, the market, benchmarking, and history, so that they fulfill the SMART criteria (“Our goal is to create an application for first aid responders that works flawlessly and is able to determine the nature of an injury within a matter of seconds”). The more specific, the better. From there, quality parameters can be derived; the extent to which they are fulfilled implies the realization of the goal. For example, flawlessness of operation can be measured by performance (99% certainty of a proper injury assessment within 2 minutes of app usage), and the utility of features by subjective opinions and tests by end-users, as determined by a focus group survey. These tangible conditions will surely help the team understand the user’s expectations and create proper testing methods to deliver accordingly.
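To make this concrete, such a quality parameter can be expressed as a simple measurable check. The sketch below is purely illustrative - the data shape, field names, and the 99%/2-minute thresholds are assumptions drawn from the hypothetical first aid example, not a real system:

```python
def meets_quality_goal(assessments, accuracy_target=0.99, time_limit_s=120):
    """Check whether recorded injury assessments meet the planned quality goal.

    Each assessment is a dict with a boolean 'correct' flag and the time
    (in seconds) the responder needed inside the app. The goal is met when
    the share of correct, on-time assessments reaches the accuracy target.
    """
    if not assessments:
        return False
    on_time = [a for a in assessments if a["duration_s"] <= time_limit_s]
    correct_on_time = sum(1 for a in on_time if a["correct"])
    return correct_on_time / len(assessments) >= accuracy_target
```

Expressing the goal this way forces the team to agree, up front, on what counts as "flawless" and how it will actually be measured.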

However, business goals and their quality derivatives are a language understandable to different stakeholders throughout the business environment, yet they lack technical depth - only field experts will know what kind of technical obstacles have to be overcome and what level of process control will ensure the expected results. They have to create a set of non-functional requirements (NFRs) that complement the business goals. In the case of IT and IT-related products, NFRs usually cover security measures, hardware compatibility, and scalability. All of them imply probable business scenarios, and the success of the product in those scenarios derives from the quality of the features - carrying on the previous example of a first aid app, the NFRs imply high performance, embodied by a low number of errors per 100 actions and a 50 ms server response time.
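Such NFRs can likewise be encoded as an automated gate that a build must pass. Again, this is only a sketch under assumed thresholds (50 ms median latency, at most 1 error per 100 actions), mirroring the hypothetical example above:

```python
import statistics

def nfr_gate(response_times_ms, error_count, action_count,
             latency_budget_ms=50, max_errors_per_100=1.0):
    """Pass only if median server latency and the error rate stay within the NFRs."""
    median_latency = statistics.median(response_times_ms)
    errors_per_100 = error_count / action_count * 100
    return median_latency <= latency_budget_ms and errors_per_100 <= max_errors_per_100
```

A gate like this turns an NFR from a sentence in a document into something the delivery pipeline can verify on every release candidate.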

Bringing business goals and NFRs together will give us a good bearing on what has to be tested. Now it is time to deploy processes and techniques to create the product to the right quality. There is a lot to understand about process design, and a great deal of literature has been published on it, therefore I will restrict myself to the basics.

When creating a quality control process, one must take into consideration effectiveness and efficiency, measured in cycle times and output rates. Therefore, the process must be balanced between the speed of a test and its accuracy, be part of a short feedback loop, and promote quick corrective action. It should also put emphasis on technological conformance with the listed expectations towards the product. In IT scenarios, and especially in lean-oriented methodologies of product development, the dynamic between time, quality, and costs echoes once again.

Repeatable processes in a factory are much easier to control through automation. Unfortunately for us project managers, the repeatability of projects is very low, and thus fewer testing procedures can be automated in comparison. Procedures for handling quality deviations have to be introduced and discussed with stakeholders - what kind of quality deviation would force the project team to rework a feature? What are the consequences for the project budget in case of high costs resulting from poor quality?

First, quality checks have to be enforced on the requirements of the product. Feature requests and tasks have to be created according to an agreed template which outlines the expected outcome of the work and helps minimize the risk of rework. In user-oriented products, product teams often use User Stories and Epics to communicate with stakeholders.
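An agreed template can even be enforced in code, so that a story without acceptance criteria is rejected the moment it is created. The field names below are my own illustrative assumptions, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    as_a: str                  # the role, e.g. "first aid responder"
    i_want: str                # the capability being requested
    so_that: str               # the expected outcome / business value
    acceptance_criteria: list = field(default_factory=list)

    def __post_init__(self):
        # Enforce the template: a story without acceptance criteria
        # cannot be tested, so it is rejected outright.
        if not self.acceptance_criteria:
            raise ValueError("a story without acceptance criteria cannot be verified")
```

In practice the same rule is usually enforced by an issue-tracker template rather than code, but the principle is identical: no expected outcome, no ticket.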

Moving on to the assembly line, some quality goals can be fulfilled by catching errors via automated testing procedures such as unit and integration tests. Depending on their complexity, coverage, technology, and number, they can span from very quick to completely paralyzing for a developer's work. They are introduced as a tool that developers can run in the background to check whether the code works technically. Finally, to merge new code into further environments, CI procedures are introduced to minimize the occurrence of errors. However, as mentioned earlier, to keep to the planned quality of the code, some of the principles have to be applied in the form of manual checks - these often reflect the professionalism of a developer. Manual checks with checklists help with environment variables and routine actions that can be omitted by mistake if a developer does not pay attention. These are usually supported by a code review procedure done by a peer and by automated code audits.
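For readers less familiar with the mechanics, here is a minimal sketch of the kind of unit test mentioned above, using Python's built-in unittest module. The triage function and its thresholds are invented for illustration:

```python
import unittest

def classify_severity(heart_rate_bpm):
    """Toy triage rule with assumed thresholds - purely illustrative."""
    if heart_rate_bpm < 40 or heart_rate_bpm > 140:
        return "critical"
    return "normal"

class ClassifySeverityTest(unittest.TestCase):
    def test_normal_range(self):
        self.assertEqual(classify_severity(70), "normal")

    def test_extremes_are_critical(self):
        self.assertEqual(classify_severity(30), "critical")
        self.assertEqual(classify_severity(180), "critical")
```

A suite like this is run locally with `python -m unittest`, and the same command wired into the CI pipeline is what gates every merge.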

Auditing quality control processes can help us assess whether our quality goals are being met and whether the processes we created contribute to delivering a successful product. Quality deviation might be caused by multiple factors - poorly written requirements, bad estimation vs actual time of completion, bad testing procedure, technical complexity or a combination of them.

Naturally, the aforementioned techniques cover only compliance with technological expectations - the business application and marketability of the product have to be controlled on a higher plane. Once technical compliance is checked, User Acceptance Testing (UAT) should occur, so that the new portion of the code reflects actual needs and requirements and is tried against real users and real business scenarios. It is a great tool to make sure that developers understand the requirements appropriately and that no key stakeholders are surprised when a new version of the software hits the market. On the other hand, UAT requires creating test scenarios and strict rules of acceptance - it does require going the extra mile, but it also minimizes the chances of deploying faulty code. It is also crucial to keep to the declared scope and have most of the criteria defined beforehand to avoid scope creep.
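The "strict rules of acceptance" can be as simple as a pre-agreed list of scenarios with a binary verdict. A minimal sketch, with scenario names invented for the first aid example:

```python
def uat_verdict(scenario_results):
    """Accept the release only if every agreed UAT scenario passed.

    scenario_results maps a scenario description to True (passed) or
    False (failed); returns (accepted, list of failed scenarios).
    """
    failed = sorted(name for name, passed in scenario_results.items()
                    if not passed)
    return (len(failed) == 0, failed)
```

The value is not in the code but in the agreement behind it: because the scenario list is fixed before testing starts, a failed UAT round produces a concrete list of rework items instead of an argument about what "done" means.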

UAT as a routine complements quality improvement - supervisors can observe and assess UAT as a complete, separate process and thus work towards its excellence. Bottlenecks, delays, reworks, and lead times will point out whether the process is at fault or whether it can be ameliorated to keep to the agreed project parameters. However, problems at the UAT stage, as at previous stages, can have different causes (for example, bottlenecked testing can be a direct effect of poorly written test scenarios, a lazy tester, NFRs not being implemented, or all of these at once) and should therefore be examined with caution.

All those processes should cover quality management in an IT project and make sure you get what you deserve. ;)

Resources: Juran’s Quality Handbook, 5th edition, 1999; ISO 9000; A Guide to the Project Management Body of Knowledge (PMBOK Guide), 6th edition, 2017, pp. 271-305.

PS: If you find this article interesting, be sure to check another one about stress in project management.