CS605 - Software Engineering II - Lecture Handout 16


Interpreting Measurements

A good metrics program is one that is simple and cheap to run, and at the same time adds substantial value for management. The following are some examples that can be used for effective project control and management.

We can collect data about the defects reported and the defects fixed, and plot both over time; the difference between the two curves is the number of defects yet to be fixed. This gives us useful information about the state of the product. If the gap between defects reported and defects fixed is widening, the product is in an unstable condition. On the other hand, if the gap is narrowing, the product is stabilizing and we can plan for shipment.
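As a minimal sketch of this idea (the weekly cumulative counts below are made-up illustration data, not from the lecture), the gap can be computed directly:

```python
# Illustrative weekly cumulative counts (made-up data).
reported = [10, 25, 45, 60, 70, 76, 80]  # cumulative defects reported
fixed = [5, 15, 30, 50, 65, 74, 79]      # cumulative defects fixed

# Defects yet to be fixed each week: the gap between the two curves.
open_defects = [r - f for r, f in zip(reported, fixed)]

# A narrowing gap in recent weeks suggests the product is stabilizing.
recent = open_defects[-3:]
stabilizing = all(a >= b for a, b in zip(recent, recent[1:]))
```

Plotting `reported`, `fixed`, and `open_defects` against time yields the kind of chart the lecture describes.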

[Figure: defects reported vs. defects fixed over time]

Similarly, we can gain useful information by plotting the defects reported against the number of use cases run. Using control limits derived from our previous data, we can check whether the actual defect counts fall within those limits. If at any given point the defects fall below the lower limit, it may mean that our testing team is not doing a good job and coverage is not adequate. On the other hand, if the count crosses the upper limit, it indicates that the design and coding are not up to the mark and probably need to be reviewed.
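This check can be sketched as a small function; the control limits here are assumed numbers standing in for limits you would derive from your own historical data:

```python
def check_control(defects_per_use_case, lower=0.2, upper=1.5):
    """Classify an observed defect density against control limits.

    `lower` and `upper` are assumed values; in practice they come
    from control charts built on previous projects' data.
    """
    if defects_per_use_case < lower:
        return "below lower limit: test coverage may be inadequate"
    if defects_per_use_case > upper:
        return "above upper limit: review design and code quality"
    return "within control limits"
```

Each test cycle, divide the defects found by the use cases run and pass the result to `check_control`.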

[Figure: defects reported vs. use cases run, with upper and lower control limits]

Another very simple graph, shown below, can give a lot of insight into design quality. If the frequency of ripple defects (defects introduced elsewhere in the system as a side effect of a fix) is too high, the modules are tightly coupled and hence the design is not maintainable.
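A minimal sketch of this metric, with an illustrative threshold (the 20% cutoff is an assumption, not a value from the lecture):

```python
def ripple_ratio(ripple_defects, total_fixes):
    """Fraction of fixes that introduced a defect somewhere else."""
    return ripple_defects / total_fixes

# Illustrative counts: 12 of 60 fixes caused a ripple defect.
ratio = ripple_ratio(12, 60)

# A high ratio hints at tight coupling in the design
# (the 0.2 threshold is an assumed value for illustration).
coupling_suspect = ratio > 0.2
```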

[Figure: frequency of ripple defects over time]

The following is yet another very simple and effective way of getting insight into the quality of the requirements. If a significant number of defects reported by the testing team are ultimately resolved as not-a-defect, there may be a severe problem with the requirements document: the two teams (development and testing) are interpreting it differently and hence reaching different conclusions.

[Figure: defects reported vs. defects resolved as not-a-defect]
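This signal reduces to a simple ratio; the sketch below uses made-up counts and an assumed threshold to show the computation:

```python
def not_a_defect_ratio(not_a_defect, total_reported):
    """Fraction of reported defects that were closed as not-a-defect."""
    return not_a_defect / total_reported

# Illustrative counts: 30 of 100 reports were closed as not-a-defect.
ratio = not_a_defect_ratio(30, 100)

# A high ratio suggests the development and testing teams are reading
# the requirements differently (the 0.2 cutoff is an assumed value).
requirements_suspect = ratio > 0.2
```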